Econstudentlog

A few diabetes papers of interest

i. Neurocognitive Functioning in Children and Adolescents at the Time of Type 1 Diabetes Diagnosis: Associations With Glycemic Control 1 Year After Diagnosis.

“Children and youth with type 1 diabetes are at risk for developing neurocognitive dysfunction, especially in the areas of psychomotor speed, attention/executive functioning, and visuomotor integration (1,2). Most research suggests that deficits emerge over time, perhaps in response to the cumulative effect of glycemic extremes (3–6). However, the idea that cognitive changes emerge gradually has been challenged (7–9). Ryan (9) argued that if diabetes has a cumulative effect on cognition, cognitive test performance should be positively correlated with illness duration. Yet he found comparable deficits in psychomotor speed (the most commonly noted area of deficit) in adolescents and young adults with illness duration ranging from 6 to 25 years. He therefore proposed a diathesis model in which cognitive declines in diabetes are especially likely to occur in more vulnerable patients, at crucial periods, in response to illness-related events (e.g., severe hyperglycemia) known to have an impact on the central nervous system (CNS) (8). This model accounts for the finding that cognitive deficits are more likely in children with early-onset diabetes, and for the accelerated cognitive aging seen in diabetic individuals later in life (7). A third hypothesized crucial period is the time leading up to diabetes diagnosis, during which severe fluctuations in blood glucose and persistent hyperglycemia often occur. Concurrent changes in blood-brain barrier permeability could result in a flood of glucose into the brain, with neurotoxic effects (9).”

“In the current study, we report neuropsychological test findings for children and adolescents tested within 3 days of diabetes diagnosis. The purpose of the study was to determine whether neurocognitive impairments are detectable at diagnosis, as predicted by the diathesis hypothesis. We hypothesized that performance on tests of psychomotor speed, visuomotor integration, and attention/executive functioning would be significantly below normative expectations, and that differences would be greater in children with earlier disease onset. We also predicted that diabetic ketoacidosis (DKA), a primary cause of diabetes-related neurological morbidity (12) and a likely proxy for severe peri-onset hyperglycemia, would be associated with poorer performance.”

“Charts were reviewed for 147 children/adolescents aged 5–18 years (mean = 10.4 ± 3.2 years) who completed a short neuropsychological screening during their inpatient hospitalization for new-onset type 1 diabetes, as part of a pilot clinical program intended to identify patients in need of further neuropsychological evaluation. Participants were patients at a large urban children’s hospital in the southwestern U.S. […] Compared with normative expectations, children/youth with type 1 diabetes performed significantly worse on GPD, GPN, VMI, and FAS (P < 0.0001 in all cases), with large decrements evident on all four measures (Fig. 1). A small but significant effect was also evident in DSB (P = 0.022). High incidence of impairment was evident on all neuropsychological tasks completed by older participants (aged 9–18 years) except DSF/DSB (Fig. 2).”

“Deficits in neurocognitive functioning were evident in children and adolescents within days of type 1 diabetes diagnosis. Participants performed >1 SD below normative expectations in bilateral psychomotor speed (GP) and 0.7–0.8 SDs below expected performance in visuomotor integration (VMI) and phonemic fluency (FAS). Incidence of impairment was much higher than normative expectations on all tasks except DSF/DSB. For example, >20% of youth were impaired in dominant hand fine-motor control, and >30% were impaired with their nondominant hand. These findings provide provisional support for Ryan’s hypothesis (7–9) that the peri-onset period may be a time of significant cognitive vulnerability.

Importantly, deficits were not evident on all measures. Performance on measures of attention/executive functioning (TMT-A, TMT-B, DSF, and DSB) was largely consistent with normative expectations, as was reading ability (WRAT-4), suggesting that the below-average performance in other areas was not likely due to malaise or fatigue. Depressive symptoms at diagnosis were associated with performance on TMT-B and FAS, but not on other measures. Thus, it seems unlikely that depressive symptoms accounted for the observed motor slowing.

Instead, the findings suggest that the visual-motor system may be especially vulnerable to early effects of type 1 diabetes. This interpretation is especially compelling given that psychomotor impairment is the most consistently reported long-term cognitive effect of type 1 diabetes. The sensitivity of the visual-motor system at diabetes diagnosis is consistent with a growing body of neuroimaging research implicating posterior white matter tracts and associated gray matter regions (particularly cuneus/precuneus) as areas of vulnerability in type 1 diabetes (30–32). These regions form part of the neural system responsible for integrating visual inputs with motor outputs, and in adults with type 1 diabetes, structural pathology in these regions is directly correlated to performance on GP [grooved pegboard test] (30,31). Arbelaez et al. (33) noted that these brain areas form part of the “default network” (34), a system engaged during internally focused cognition that has high resting glucose metabolism and may be especially vulnerable to glucose variability.”

“It should be noted that previous studies (e.g., Northam et al. [3]) have not found evidence of neurocognitive dysfunction around the time of diabetes diagnosis. This may be due to study differences in measures, outcomes, and/or time frame. We know of no other studies that completed neuropsychological testing within days of diagnosis. Given our time frame, it is possible that our findings reflect transient effects rather than more permanent changes in the CNS. Contrary to predictions, we found no association between DKA at diagnosis and neurocognitive performance […] However, even transient effects could be considered potential indicators of CNS vulnerability. Neurophysiological changes at the time of diagnosis have been shown to persist under certain circumstances or for some patients. […] [Some] findings suggest that some individuals may be particularly susceptible to the effects of glycemic extremes on neurocognitive function, consistent with a large body of research in developmental neuroscience indicating individual differences in neurobiological vulnerability to adverse events. Thus, although it is possible that the neurocognitive impairments observed in our study might resolve with euglycemia, deficits at diagnosis could still be considered a potential marker of CNS vulnerability to metabolic perturbations (both acute and chronic).”

“In summary, this study provides the first demonstration that type 1 diabetes–associated neurocognitive impairment can be detected at the time of diagnosis, supporting the possibility that deficits arise secondary to peri-onset effects. Whether these effects are transient markers of vulnerability or represent more persistent changes in CNS awaits further study.”

ii. Association Between Impaired Cardiovascular Autonomic Function and Hypoglycemia in Patients With Type 1 Diabetes.

“Cardiovascular autonomic neuropathy (CAN) is a chronic complication of diabetes and an independent predictor of cardiovascular disease (CVD) morbidity and mortality (1–3). The mechanisms of CAN are complex and not fully understood. It can be assessed by simple cardiovascular reflex tests (CARTs) and heart rate variability (HRV) studies that were shown to be sensitive, noninvasive, and reproducible (3,4).”

“HbA1c fails to capture information on the daily fluctuations in blood glucose levels, termed glycemic variability (GV). Recent observations have fostered the notion that GV, independent of HbA1c, may confer an additional risk for the development of micro- and macrovascular diabetes complications (8,9). […] the relationship between GV and chronic complications, specifically CAN, in patients with type 1 diabetes has not been systematically studied. In addition, limited data exist on the relationship between hypoglycemic components of the GV and measures of CAN among subjects with type 1 diabetes (11,12). Therefore, we have designed a prospective study to evaluate the impact and the possible sustained effects of GV on measures of cardiac autonomic function and other cardiovascular complications among subjects with type 1 diabetes […] In the present communication, we report cross-sectional analyses at baseline between indices of hypoglycemic stress on measures of cardiac autonomic function.”

“The following measures of CAN were predefined as outcomes of interests and analyzed: expiration-to-inspiration ratio (E:I), Valsalva ratio, 30:15 ratios, low-frequency (LF) power (0.04 to 0.15 Hz), high-frequency (HF) power (0.15 to 0.4 Hz), and LF/HF at rest and during CARTs. […] We found that LBGI [low blood glucose index] and AUC [area under the curve] hypoglycemia were associated with reduced LF and HF power of HRV [heart rate variability], suggesting an impaired autonomic function, which was independent of glucose control as assessed by the HbA1c.”
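The LF and HF measures quoted above are just integrated spectral power within fixed frequency bands of the heart-rate signal. Below is a minimal numpy sketch of that computation; the function names and the synthetic 5-minute recording are illustrative assumptions on my part (real HRV analysis starts from RR intervals, which are then interpolated to an evenly sampled tachogram before spectral estimation).

```python
import numpy as np

def band_powers(signal, fs, bands={"LF": (0.04, 0.15), "HF": (0.15, 0.4)}):
    """Integrate one-sided periodogram power within each frequency band.

    signal : evenly sampled heart-rate series (tachogram)
    fs     : sampling frequency in Hz
    """
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()                                   # remove DC so it doesn't leak into LF
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / (fs * len(x))  # one-sided periodogram
    psd[1:-1] *= 2                                     # fold negative frequencies
    df = freqs[1] - freqs[0]
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum() * df
            for name, (lo, hi) in bands.items()}

# Synthetic 5-minute recording at 4 Hz in which a 0.10 Hz (LF-band)
# oscillation dominates a weaker 0.25 Hz (HF-band) component:
fs = 4.0
t = np.arange(0, 300, 1.0 / fs)
hr = 70 + 3.0 * np.sin(2 * np.pi * 0.10 * t) + 0.5 * np.sin(2 * np.pi * 0.25 * t)
p = band_powers(hr, fs)
ratio = p["LF"] / p["HF"]   # the LF/HF "sympathovagal balance" index
```

Here the LF/HF ratio comes out well above 1, as expected given the dominant 0.10 Hz component; the reduced LF and HF power reported in the paper would correspond to smaller absolute band powers in patients with hypoglycemic stress.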

“Our findings are in concordance with a recent report demonstrating attenuation of the baroreflex sensitivity and of the sympathetic response to various cardiovascular stressors after antecedent hypoglycemia among healthy subjects who were exposed to acute hypoglycemic stress (18). Similar associations […] were also reported in a small study of subjects with type 2 diabetes (19). […] higher GV and hypoglycemic stress may have an acute effect on modulating autonomic control with inducing a sympathetic/vagal imbalance and a blunting of the cardiac vagal control (18). The impairment in the normal counter-regulatory autonomic responses induced by hypoglycemia on the cardiovascular system could be important in healthy individuals but may be particularly detrimental in individuals with diabetes who have hitherto compromised cardiovascular function and/or subclinical CAN. In these individuals, hypoglycemia may also induce QT interval prolongation, increase plasma catecholamine levels, and lower serum potassium (19,20). In concert, these changes may lower the threshold for serious arrhythmia (19,20) and could result in an increased risk of cardiovascular events and sudden cardiac death. Conversely, the presence of CAN may increase the risk of hypoglycemia through hypoglycemia unawareness and subsequent impaired ability to restore euglycemia (21) through impaired sympathoadrenal response to hypoglycemia or delayed gastric emptying. […] A possible pathogenic role of GV/hypoglycemic stress on CAN development and progressions should be also considered. Prior studies in healthy and diabetic subjects have found that higher exposure to hypoglycemia reduces the counter-regulatory hormone (e.g., epinephrine, glucagon, and adrenocorticotropic hormone) and blunts autonomic nervous system responses to subsequent hypoglycemia (21). […] Our data […] suggest that wide glycemic fluctuations, particularly hypoglycemic stress, may increase the risk of CAN in patients with type 1 diabetes.”

“In summary, in this cohort of relatively young and uncomplicated patients with type 1 diabetes, GV and higher hypoglycemic stress were associated with impaired HRV reflective of sympathetic/parasympathetic dysfunction with potential important clinical consequences.”

iii. Elevated Levels of hs-CRP Are Associated With High Prevalence of Depression in Japanese Patients With Type 2 Diabetes: The Diabetes Distress and Care Registry at Tenri (DDCRT 6).

“In the last decade, several studies have been published that suggest a close association between diabetes and depression. Patients with diabetes have a high prevalence of depression (1) […] and a high prevalence of complications (3). In addition, depression is associated with mortality in these patients (4). […] Because of this strong association, several recent studies have suggested the possibility of a common biological pathway such as inflammation as an underlying mechanism of the association between depression and diabetes (5). […] Multiple mechanisms are involved in the association between diabetes and inflammation, including modulation of lipolysis, alteration of glucose uptake by adipose tissue, and an indirect mechanism involving an increase in free fatty acid levels blocking the insulin signaling pathway (10). Psychological stress can also cause inflammation via innervation of cytokine-producing cells and activation of the sympathetic nervous systems and adrenergic receptors on macrophages (11). Depression enhances the production of inflammatory cytokines (12–14). Overproduction of inflammatory cytokines may stimulate corticotropin-releasing hormone production, a mechanism that leads to hypothalamic-pituitary axis activity. Conversely, cytokines induce depressive-like behaviors; in studies where healthy participants were given endotoxin infusions to trigger cytokine release, the participants developed classic depressive symptoms (15). Based on this evidence, it could be hypothesized that inflammation is the common biological pathway underlying the association between diabetes and depression.”

“[F]ew studies have examined the clinical role of inflammation and depression as biological correlates in patients with diabetes. […] In this study, we hypothesized that high CRP [C-reactive protein] levels were associated with the high prevalence of depression in patients with diabetes and that this association may be modified by obesity or glycemic control. […] Patient data were derived from the second-year survey of a diabetes registry at Tenri Hospital, a regional tertiary care teaching hospital in Japan. […] 3,573 patients […] were included in the study. […] Overall, mean age, HbA1c level, and BMI were 66.0 years, 7.4% (57.8 mmol/mol), and 24.6 kg/m2, respectively. Patients with major depression tended to be relatively young […] and female […] with a high BMI […], high HbA1c levels […], and high hs-CRP levels […]; had more diabetic nephropathy […], required more insulin therapy […], and exercised less […]”.

“In conclusion, we observed that hs-CRP levels were associated with a high prevalence of major depression in patients with type 2 diabetes with a BMI of ≥25 kg/m2. […] In patients with a BMI of <25 kg/m2, no significant association was found between hs-CRP quintiles and major depression […] We did not observe a significant association between hs-CRP and major depression in either of HbA1c subgroups. […] Our results show that the association between hs-CRP and diabetes is valid even in an Asian population, but it might not be extended to nonobese subjects. […] several factors such as obesity and glycemic control may modify the association between inflammation and depression. […] Obesity is strongly associated with chronic inflammation.”

iv. A Novel Association Between Nondipping and Painful Diabetic Polyneuropathy.

“Sleep problems are common in painful diabetic polyneuropathy (PDPN) (1) and contribute to the effect of pain on quality of life. Nondipping (the absence of the nocturnal fall in blood pressure [BP]) is a recognized feature of diabetic cardiac autonomic neuropathy (CAN) and is attributed to the abnormal prevalence of nocturnal sympathetic activity (2). […] This study aimed to evaluate the relationship of the circadian pattern of BP with both neuropathic pain and pain-related sleep problems in PDPN […] Investigating the relationship between PDPN and BP circadian pattern, we found patients with PDPN exhibited impaired nocturnal decrease in BP compared with those without neuropathy, as well as higher nocturnal systolic BP than both those without DPN and with painless DPN. […] in multivariate analysis including comorbidities and most potential confounders, neuropathic pain was an independent determinant of ∆ in BP and nocturnal systolic BP.”
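Nondipping has a simple operational definition: the nocturnal fall in BP, expressed as a percentage of the daytime mean, falls below some cutoff. A minimal sketch follows; the 10% threshold is the conventional one in the ambulatory-BP literature, not a value quoted in the excerpt above, and the function names are mine.

```python
def dip_percent(day_sbp_mean, night_sbp_mean):
    """Nocturnal fall in systolic BP as a percentage of the daytime mean."""
    return 100.0 * (day_sbp_mean - night_sbp_mean) / day_sbp_mean

def is_nondipper(day_sbp_mean, night_sbp_mean, threshold=10.0):
    """Nondipper: nocturnal fall below `threshold` percent (commonly 10%)."""
    return dip_percent(day_sbp_mean, night_sbp_mean) < threshold

# A dipper (130 -> 115 mmHg: an 11.5% fall) vs. a nondipper (130 -> 126: 3.1%)
dipper = is_nondipper(130.0, 115.0)       # False
nondipper = is_nondipper(130.0, 126.0)    # True
```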

“PDPN could behave as a marker for the presence and severity of CAN. […] PDPN should increasingly be regarded as a condition of high cardiovascular risk.”

v. Reduced Testing Frequency for Glycated Hemoglobin, HbA1c, Is Associated With Deteriorating Diabetes Control.

I think a potentially important take-away from this paper, which the authors don’t really talk about, concerns the analysis of time series data in research contexts where the HbA1c variable is available at the individual level at some base frequency. When you encounter individuals for whom HbA1c is unobserved for some time periods, or is not observed at the frequency you’d expect, such (implicit) missing values may not be missing at random (for more on these topics see e.g. this post). More specifically, in light of the findings of this paper I think it would make a lot of sense, when doing time-to-event analyses, to default to the assumption that missing values indicate worse-than-average metabolic control during the unobserved part of the time series, especially when the values are missing for an extended period of time.
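The point about informative missingness can be operationalized by flagging gaps between consecutive tests that exceed the expected testing interval. A minimal sketch, assuming a roughly six-month cutoff motivated by (but not taken directly from) the paper's finding that retest intervals beyond six months were associated with deteriorating control:

```python
from datetime import date

def flag_informative_gaps(test_dates, max_gap_days=183):
    """Return (start, end) pairs where consecutive HbA1c test dates are
    further apart than `max_gap_days` -- candidate periods of
    'informatively missing' data for a time-to-event analysis."""
    ordered = sorted(test_dates)
    return [(prev, curr)
            for prev, curr in zip(ordered, ordered[1:])
            if (curr - prev).days > max_gap_days]

# Quarterly testing with one long unobserved stretch in between:
dates = [date(2008, 1, 10), date(2008, 4, 12), date(2009, 3, 1), date(2009, 6, 5)]
gaps = flag_informative_gaps(dates)
# -> the single ~11-month gap from 2008-04-12 to 2009-03-01
```

In a survival model one could then, for instance, carry forward a pessimistic HbA1c imputation over the flagged intervals rather than treating them as missing at random.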

The authors of the paper consider metabolic control an outcome to be explained by the testing frequency. That’s one way to approach these things, but it’s not the only one, and it’s worth keeping in mind that some patients sometimes make a conscious decision not to show up for their appointments/tests; i.e., the testing frequency is not necessarily fully determined by the medical staff, although they of course have an important impact on this variable.

Some observations from the paper:

“We examined repeat HbA1c tests (400,497 tests in 79,409 patients, 2008–2011) processed by three U.K. clinical laboratories. We examined the relationship between retest interval and 1) percentage change in HbA1c and 2) proportion of cases showing a significant HbA1c rise. The effect of demographics factors on these findings was also explored. […] Figure 1 shows the relationship between repeat requesting interval (categorized in 1-month intervals) and percentage change in HbA1c concentration in the total data set. From 2 months onward, there was a direct relationship between retesting interval and control. A testing frequency of >6 months was associated with deterioration in control. The optimum testing frequency in order to maximize the downward trajectory in HbA1c between two tests was approximately four times per year. Our data also indicate that testing more frequently than 2 months has no benefit over testing every 2–4 months. Relative to the 2–3 month category, all other categories demonstrated statistically higher mean change in HbA1c (all P < 0.001). […] similar patterns were observed for each of the three centers, with the optimum interval to improvement in overall control at ∼3 months across all centers.”

“[I]n patients with poor control, the pattern was similar to that seen in the total group, except that 1) there was generally a more marked decrease or more modest increase in change of HbA1c concentration throughout and, consequently, 2) a downward trajectory in HbA1c was observed when the interval between tests was up to 8 months, rather than the 6 months as seen in the total group. In patients with a starting HbA1c of <6% (<42 mmol/mol), there was a generally linear relationship between interval and increase in HbA1c, with all intervals demonstrating an upward change in mean HbA1c. The intermediate group showed a similar pattern as those with a starting HbA1c of <6% (<42 mmol/mol), but with a steeper slope.”

“In order to examine the potential link between monitoring frequency and the risk of major deterioration in control, we then assessed the relationship between testing interval and proportion of patients demonstrating an increase in HbA1c beyond the normal biological and analytical variation in HbA1c […] Using this definition of significant increase as a ≥9.9% rise in subsequent HbA1c, our data show that the proportion of patients showing this magnitude of rise increased month to month, with increasing intervals between tests for each of the three centers. […] testing at 2–3-monthly intervals would, at a population level, result in a marked reduction in the proportion of cases demonstrating a significant increase compared with annual testing […] irrespective of the baseline HbA1c, there was a generally linear relationship between interval and the proportion demonstrating a significant increase in HbA1c, though the slope of this relationship increased with rising initial HbA1c.”
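The two quantities the paper tracks are the percentage change in HbA1c between consecutive tests and, per retest-interval bin, the proportion of patients whose rise exceeds normal biological plus analytical variation (the ≥9.9% criterion quoted above). A minimal sketch of both computations; the function names and example values are mine, not the paper's:

```python
def pct_change(first_hba1c, second_hba1c):
    """Percentage change between two consecutive HbA1c results."""
    return 100.0 * (second_hba1c - first_hba1c) / first_hba1c

def significant_rise(first_hba1c, second_hba1c, threshold=9.9):
    """Rise beyond normal biological + analytical variation (paper: >=9.9%)."""
    return pct_change(first_hba1c, second_hba1c) >= threshold

def rise_proportion_by_interval(pairs):
    """pairs: iterable of (interval_months, first_hba1c, second_hba1c).
    Returns {interval_months: proportion of pairs with a significant rise}."""
    counts, hits = {}, {}
    for months, a, b in pairs:
        counts[months] = counts.get(months, 0) + 1
        hits[months] = hits.get(months, 0) + int(significant_rise(a, b))
    return {m: hits[m] / counts[m] for m in counts}

# e.g., 7.0% -> 7.8% is a +11.4% relative change, counted as a significant rise
props = rise_proportion_by_interval([
    (3, 7.0, 7.8), (3, 7.0, 7.2),    # 3-month interval: one rise of two
    (12, 7.0, 8.0), (12, 7.0, 8.5),  # 12-month interval: two of two
])
```

Applied to the full data set, this per-interval proportion is what rises month to month in the paper, motivating the recommended 2–3-monthly retesting.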

“Previous data from our and other groups on requesting patterns indicated that relatively few patients in general practice were tested annually (5,6). […] Our data indicate that for a HbA1c retest interval of more than 2 months, there was a direct relationship between retesting interval and control […], with a retest frequency of greater than 6 months being associated with deterioration in control. The data showed that for diabetic patients as a whole, the optimum repeat testing interval should be four times per year, particularly in those with poorer diabetes control (starting HbA1c >7% [≥53 mmol/mol]). […] The optimum retest interval across the three centers was similar, suggesting that our findings may be unrelated to clinical laboratory factors, local policies/protocols on testing, or patient demographics.”

It might be important to mention that there are important cross-country differences in how often people with diabetes have their HbA1c measured. I’m unsure whether standards have changed since then, but at least in Denmark a specific treatment goal of the Danish Regions a few years ago was that 95% of diabetics should have had their HbA1c measured within the last year (here’s a relevant link to some stuff I wrote about related topics a while back).

October 2, 2017 Posted by | Cardiology, Diabetes, Immunology, Medicine, Neurology, Psychology, Statistics, Studies |

The Biology of Moral Systems (III)

This will be my last post about the book. It’s an important work which deserves to be read by far more people than have already read it. I have added some quotes and observations from the last chapters of the book below.

“If egoism, as self-interest in the biologists’ sense, is the reason for the promotion of ethical behavior, then, paradoxically, it is expected that everyone will constantly promote the notion that egoism is not a suitable theory of action, and, a fortiori, that he himself is not an egoist. Most of all he must present this appearance to his closest associates because it is in his best interests to do so – except, perhaps, to his closest relatives, to whom his egoism may often be displayed in cooperative ventures from which some distant- or non-relative suffers. Indeed, it may be arguable that it will be in the egoist’s best interest not to know (consciously) or to admit to himself that he is an egoist because of the value to himself of being able to convince others he is not.”

“The function of [societal] punishments and rewards, I have suggested, is to manipulate the behavior of participating individuals, restricting individual efforts to serve their own interests at others’ expense so as to promote harmony and unity within the group. The function of harmony and unity […] is to allow the group to compete against hostile forces, especially other human groups. It is apparent that success of the group may serve the interests of all individuals in the group; but it is also apparent that group success can be achieved with different patterns of individual success differentials within the group. So […] it is in the interests of those who are differentially successful to promote both unity and the rules so that group success will occur without necessitating changes deleterious to them. Similarly, it may be in the interests of those individuals who are relatively unsuccessful to promote dissatisfaction with existing rules and the notion that group success would be more likely if the rules were altered to favor them. […] the rules of morality and law alike seem not to be designed explicitly to allow people to live in harmony within societies but to enable societies to be sufficiently united to deter their enemies. Within-society harmony is the means not the end. […] extreme within-group altruism seems to correlate with and be historically related to between-group strife.”

“There are often few or no legitimate or rational expectations of reciprocity or “fairness” between social groups (especially warring or competing groups such as tribes or nations). Perhaps partly as a consequence, lying, deceit, or otherwise nasty or even heinous acts committed against enemies may sometimes not be regarded as immoral by others within the group of those who commit them. They may even be regarded as highly moral if they seem dramatically to serve the interests of the group whose members commit them.”

“Two major assumptions, made universally or most of the time by philosophers, […] are responsible for the confusion that prevents philosophers from making sense out of morality […]. These assumptions are the following: 1. That proximate and ultimate mechanisms or causes have the same kind of significance and can be considered together as if they were members of the same class of causes; this is a failure to understand that proximate causes are evolved because of ultimate causes, and therefore may be expected to serve them, while the reverse is not true. Thus, pleasure is a proximate mechanism that in the usual environments of history is expected to impel us toward behavior that will contribute to our reproductive success. Contrarily, acts leading to reproductive success are not proximate mechanisms that evolved because they served the ultimate function of bringing us pleasure. 2. That morality inevitably involves some self-sacrifice. This assumption involves at least three elements: a. Failure to consider altruism as benefits to the actor. […] b. Failure to comprehend all avenues of indirect reciprocity within groups. c. Failure to take into account both within-group and between-group benefits.”

“If morality means true sacrifice of one’s own interests, and those of his family, then it seems to me that we could not have evolved to be moral. If morality requires ethical consistency, whereby one does not do socially what he would not advocate and assist all others also to do, then, again, it seems to me that we could not have evolved to be moral. […] humans are not really moral at all, in the sense of “true sacrifice” given above, but […] the concept of morality is useful to them. […] If it is so, then we might imagine that, in the sense and to the extent that they are anthropomorphized, the concepts of saints and angels, as well as that of God, were also created because of their usefulness to us. […] I think there have been far fewer […] truly self-sacrificing individuals than might be supposed, and most cases that might be brought forward are likely instead to be illustrations of the complexity and indirectness of reciprocity, especially the social value of appearing more altruistic than one is. […] I think that […] the concept of God must be viewed as originally generated and maintained for the purpose – now seen by many as immoral – of furthering the interests of one group of humans at the expense of one or more other groups. […] Gods are inventions originally developed to extend the notion that some have greater rights than others to design and enforce rules, and that some are more destined to be leaders, others to be followers. This notion, in turn, arose out of prior asymmetries in both power and judgment […] It works when (because) leaders are (have been) valuable, especially in the context of intergroup competition.”

“We try to move moral issues in the direction of involving no conflict of interest, always, I suggest, by seeking universal agreement with our own point of view.”

“Moral and legal systems are commonly distinguished by those, like moral philosophers, who study them formally. I believe, however, that the distinction between them is usually poorly drawn, and based on a failure to realize that moral as well as legal behavior occurs as a result of probably and possible punishments and reward. […] we often internalize the rules of law as well as the rules of morality – and perhaps by the same process […] It would seem that the rules of law are simply a specialized, derived aspect of what in earlier societies would have been a part of moral rules. On the other hand, law covers only a fraction of the situations in which morality is involved […] Law […] seems to be little more than ethics written down.”

“Anyone who reads the literature on dispute settlement within different societies […] will quickly understand that genetic relatedness counts: it allows for one-way flows of benefits and alliances. Long-term association also counts; it allows for reliability and also correlates with genetic relatedness. […] The larger the social group, the more fluid its membership; and the more attenuated the social interactions of its membership, the more they are forced to rely on formal law”.

“[I]ndividuals have separate interests. They join forces (live in groups; become social) when they share certain interests that can be better realized for all by close proximity or some forms of cooperation. Typically, however, the overlaps of interests rarely are completely congruent with those of either other individuals or the rest of the group. This means that, even during those times when individual interests within a group are most broadly overlapping, we may expect individuals to temper their cooperation with efforts to realize their own interests, and we may also expect them to have evolved to be adept at using others, or at thwarting the interests of others, to serve themselves (and their relatives). […] When the interests of all are most nearly congruent, it is essentially always due to a threat shared equally. Such threats almost always have to be external (or else they are less likely to affect everyone equally […] External threats to societies are typically other societies. Maintenance of such threats can yield situations in which everyone benefits from rigid, hierarchical, quasi-military, despotic government. Liberties afforded leaders – even elaborate perquisites of dictators – may be tolerated because such threats are ever-present […] Extrinsic threats, and the governments they produce, can yield inflexibilities of political structures that can persist across even lengthy intervals during which the threats are absent. Some societies have been able to structure their defenses against external threats as separate units (armies) within society, and to keep them separate. These rigidly hierarchical, totalitarian, and dictatorial subunits rise and fall in size and influence according to the importance of the external threat. […] Discussion of liberty and equality in democracies closely parallels discussions of morality and moral systems. In either case, adding a perspective from evolutionary biology seems to me to have potential for clarification.”

“It is indeed common, if not universal, to regard moral behavior as a kind of altruism that necessarily yields the altruist less than he gives, and to see egoism as either the opposite of morality or the source of immorality; but […] this view is usually based on an incomplete understanding of nepotism, reciprocity, and the significance of within-group unity for between-group competition. […] My view of moral systems in the real world, however, is that they are systems in which costs and benefits of specific actions are manipulated so as to produce reasonably harmonious associations in which everyone nevertheless pursues his own (in evolutionary terms) self-interest. I do not expect that moral and ethical arguments can ever be finally resolved. Compromises and contracts, then, are (at least currently) the only real solutions to actual conflicts of interest. This is why moral and ethical decisions must arise out of decisions of the collective of affected individuals; there is no single source of right and wrong.

I would also argue against the notion that rationality can be easily employed to produce a world of humans that self-sacrifice in favor of other humans, not to say nonhuman animals, plants, and inanimate objects. Declarations of such intentions may themselves often be the acts of self-interested persons developing, consciously or not, a socially self-benefiting view of themselves as extreme altruists. In this connection it is not irrelevant that the more dissimilar a species or object is to one’s self the less likely it is to provide a competitive threat by seeking the same resources. Accordingly, we should not be surprised to find humans who are highly benevolent toward other species or inanimate objects (some of which may serve them uncomplainingly), yet relatively hostile and noncooperative with fellow humans. As Darwin (1871) noted with respect to dogs, we have selected our domestic animals to return our altruism with interest.”

“It is not easy to discover precisely what historical differences have shaped current male-female differences. If, however, humans are in a general way similar to other highly parental organisms that live in social groups […] then we can hypothesize as follows: for men much of sexual activity has had as a main (ultimate) significance the initiating of pregnancies. It would follow that when a man avoids copulation it is likely to be because (1) there is no likelihood of pregnancy or (2) the costs entailed (venereal disease, danger from competition with other males, lowered status if the event becomes public, or an undesirable commitment) are too great in comparison with the probability that pregnancy will be induced. The man himself may be judging costs against the benefits of immediate sensory pleasures, such as orgasms (i.e., rather than thinking about pregnancy he may say that he was simply uninterested), but I am assuming that selection has tuned such expectations in terms of their probability of leading to actual reproduction […]. For women, I hypothesize, sexual activity per se has been more concerned with the securing of resources (again, I am speaking of ultimate and not necessarily conscious concerns) […]. Ordinarily, when women avoid or resist copulation, I speculate further, the disinterest, aversion, or inhibition may be traceable eventually to one (or more) of three causes: (1) there is no promise of commitment (of resources), (2) there is a likelihood of undesirable commitment (e.g., to a man with inadequate resources), or (3) there is a risk of loss of interest by a man with greater resources, than the one involved […] A man behaving so as to avoid pregnancies, and who derives from an evolutionary background of avoiding pregnancies, should be expected to favor copulation with women who are for age or other reasons incapable of pregnancy. 
A man derived from an evolutionary process in which securing of pregnancies typically was favored, may be expected to be most interested sexually in women most likely to become pregnant and near the height of the reproductive probability curve […] This means that men should usually be expected to anticipate the greatest sexual pleasure with young, healthy, intelligent women who show promise of providing superior parental care. […] In sexual competition, the alternatives of a man without resources are to present himself as a resource (i.e., as a mimic of one with resources or as one able and likely to secure resources because of his personal attributes […]), to obtain sex by force (rape), or to secure resources through a woman (e.g., allow himself to be kept by a relatively undesired woman, perhaps as a vehicle to secure liaisons with other women). […] in nonhuman species of higher animals, control of the essential resources of parenthood by females correlates with lack of parental behavior by males, promiscuous polygyny, and absence of long-term pair bonds. There is some evidence of parallel trends within human societies (cf. Flinn, 1981).” [It’s of some note that quite a few good books have been written on these topics since Alexander first published his book, so there are many places to look for detailed coverage of topics like these if you’re curious to know more – I can recommend both Kappeler & van Schaik (a must-read book on sexual selection, in my opinion) & Bobby Low. I didn’t think too highly of Miller or Meston & Buss, but those are a few other books on these topics which I’ve read – US].

“The reason that evolutionary knowledge has no moral content is [that] morality is a matter of whose interests one should, by conscious and willful behavior, serve, and how much; evolutionary knowledge contains no messages on this issue. The most it can do is provide information about the reasons for current conditions and predict some consequences of alternative courses of action. […] If some biologists and nonbiologists make unfounded assertions into conclusions, or develop pernicious and fallible arguments, then those assertions and arguments should be exposed for what they are. The reason for doing this, however, is not […should not be..? – US] to prevent or discourage any and all analyses of human activities, but to enable us to get on with a proper sort of analysis. Those who malign without being specific; who attack people rather than ideas; who gratuitously translate hypotheses into conclusions and then refer to them as “explanations,” “stories,” or “just-so-stories”; who parade the worst examples of argument and investigation with the apparent purpose of making all efforts at human self-analysis seem silly and trivial, I see as dangerously close to being ideologues at least as worrisome as those they malign. I cannot avoid the impression that their purpose is not to enlighten, but to play upon the uneasiness of those for whom the approach of evolutionary biology is alien and disquieting, perhaps for political rather than scientific purposes. It is more than a little ironic that the argument of politics rather than science is their own chief accusation with respect to scientists seeking to analyze human behavior in evolutionary terms (e.g. Gould and Lewontin, 1979 […]).”

“[C]urrent selective theory indicates that natural selection has never operated to prevent species extinction. Instead it operates by saving the genetic materials of those individuals or families that outreproduce others. Whether species become extinct or not (and most have) is an incidental or accidental effect of natural selection. An inference from this is that the members of no species are equipped, as a direct result of their evolutionary history, with traits designed explicitly to prevent extinction when that possibility looms. […] Humans are no exception: unless their comprehension of the likelihood of extinction is so clear and real that they perceive the threat to themselves as individuals, and to their loved ones, they cannot be expected to take the collective action that will be necessary to reduce the risk of extinction.”

“In examining ourselves […] we are forced to use the attributes we wish to analyze to carry out the analysis, while resisting certain aspects of the analysis. At the very same time, we pretend that we are not resisting at all but are instead giving perfectly legitimate objections; and we use our realization that others will resist the analysis, for reasons as arcane as our own, to enlist their support in our resistance. And they very likely will give it. […] If arguments such as those made here have any validity it follows that a problem faced by everyone, in respect to morality, is that of discovering how to subvert or reduce some aspects of individual selfishness that evidently derive from our history of genetic individuality.”

“Essentially everyone thinks of himself as well-meaning, but from my viewpoint a society of well-meaning people who understand themselves and their history very well is a better milieu than a society of well-meaning people who do not.”

September 22, 2017 Posted by | Anthropology, Biology, Books, Evolutionary biology, Genetics, Philosophy, Psychology, Religion

Depression and Heart Disease (II)

Below I have added some more observations from the book, which I gave four stars on goodreads.

“A meta-analysis of twin (and family) studies estimated the heritability of adult MDD around 40% [16] and this estimate is strikingly stable across different countries [17, 18]. If measurement error due to unreliability is taken into account by analysing MDD assessed on two occasions, heritability estimates increase to 66% [19]. Twin studies in children further show that there is already a large genetic contribution to depressive symptoms in youth, with heritability estimates varying between 50% and 80% [20–22]. […] Cardiovascular research in twin samples has suggested a clear-cut genetic contribution to hypertension (h2 = 61%) [30], fatal stroke (h2 = 32%) [31] and CAD (h2 = 57% in males and 38% in females) [32]. […] A very important, and perhaps underestimated, source of pleiotropy in the association of MDD and CAD are the major behavioural risk factors for CAD: smoking and physical inactivity. These factors are sometimes considered ‘environmental’, but twin studies have shown that such behaviours have a strong genetic component [33–35]. Heritability estimates for [many] established risk factors [for CAD – e.g. BMI, smoking, physical inactivity – US] are 50% or higher in most adult twin samples and these estimates remain remarkably similar across the adult life span [41–43].”
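
[A quick aside of my own, not from the book: the heritability figures above come from comparing monozygotic (MZ) and dizygotic (DZ) twin resemblance. The studies cited fit formal ACE structural equation models, but the classical Falconer decomposition captures the basic arithmetic; the twin correlations below are hypothetical, chosen only to reproduce the ~40% adult MDD heritability mentioned in the quote – US]

```python
# Illustrative sketch (not the authors' model): Falconer's classical
# decomposition of trait variance from twin correlations.
def falconer_ace(r_mz, r_dz):
    """Return (h2, c2, e2): additive genetic, shared-environment and
    unique-environment variance components."""
    h2 = 2 * (r_mz - r_dz)   # MZ twins share ~100% of genes, DZ ~50%
    c2 = r_mz - h2           # shared (family) environment
    e2 = 1 - r_mz            # unique environment + measurement error
    return h2, c2, e2

# Hypothetical correlations giving h2 ~ 0.40, c2 ~ 0.05, e2 ~ 0.55:
h2, c2, e2 = falconer_ace(r_mz=0.45, r_dz=0.25)
```

Note that a large e² partly reflects measurement error, which is why the quote's test–retest analysis (MDD assessed on two occasions) pushes heritability estimates upward.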

“The crucial question is whether the genetic factors underlying MDD also play a role in CAD and CAD risk factors. To test for an overlap in the genetic factors, a bivariate extension of the structural equation model for twin data can be used [57]. […] If the depressive symptoms in a twin predict the IL-6 level in his/her co-twin, this can only be explained by an underlying factor that affects both depression and IL-6 levels and is shared by members of a family. If the prediction is much stronger in MZ than in DZ twins, this signals that the underlying factor is their shared genetic make-up, rather than their shared (family) environment. […] It is important to note clearly here that genetic correlations do not prove the existence of pleiotropy, because genes that influence MDD may, through causal effects of MDD on CAD risk, also become ‘CAD genes’. The absence of a genetic correlation, however, can be used to falsify the existence of genetic pleiotropy. For instance, the hypothesis that genetic pleiotropy explains part of the association between depressive symptoms and IL-6 requires the genetic correlation between these traits to be significantly different from zero. [Furthermore,] the genetic correlation should have a positive value. A negative genetic correlation would signal that genes that increase the risk for depression decrease the risk for higher IL-6 levels, which would go against the genetic pleiotropy hypothesis. […] Su et al. [26] […] tested pleiotropy as a possible source of the association of depressive symptoms with Il-6 in 188 twin pairs of the Vietnam Era Twin (VET) Registry. The genetic correlation between depressive symptoms and IL-6 was found to be positive and significant (RA = 0.22, p = 0.046)”
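
[A crude, back-of-the-envelope version of the cross-twin cross-trait logic described above – my own sketch, not the bivariate structural equation model the authors used. The point is simply why a larger MZ than DZ cross-correlation implies a positive genetic correlation – US]

```python
import math

# Falconer-style approximation of the genetic correlation between two
# traits from cross-twin cross-trait correlations (trait x in twin 1
# vs trait y in the co-twin), for MZ and DZ pairs.
def genetic_correlation(ct_mz, ct_dz, h2_x, h2_y):
    cov_a = 2 * (ct_mz - ct_dz)          # additive genetic covariance
    return cov_a / math.sqrt(h2_x * h2_y)

# Hypothetical inputs: depression (h2 = 0.40), IL-6 (h2 = 0.30), and
# cross-correlations of 0.12 (MZ) vs 0.075 (DZ); yields rA ~ 0.26.
r_a = genetic_correlation(ct_mz=0.12, ct_dz=0.075, h2_x=0.40, h2_y=0.30)
```

If the MZ and DZ cross-correlations were equal, cov_a would be zero and the family resemblance across traits would point to shared environment instead.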

“For the association between MDD and physical inactivity, the dominant hypothesis has not been that MDD causes a reduction in regular exercise, but instead that regular exercise may act as a protective factor against mood disorders. […] we used the twin method to perform a rigorous test of this popular hypothesis [on] 8558 twins and their family members using their longitudinal data across 2-, 4-, 7-, 9- and 11-year follow-up periods. In spite of sufficient statistical power, we found only the genetic correlation to be significant (ranging between ∼0.16 and ∼0.44 for different symptom scales and different time-lags). The environmental correlations were essentially zero. This means that the environmental factors that cause a person to take up exercise do not cause lower anxiety or depressive symptoms in that person, currently or at any future time point. In contrast, the genetic factors that cause a person to take up exercise also cause lower anxiety or depressive symptoms in that person, at the present and all future time points. This pattern of results falsifies the causal hypothesis and leaves genetic pleiotropy as the most likely source for the association between exercise and lower levels of anxiety and depressive symptoms in the population at large. […] Taken together, [the] studies support the idea that genetic pleiotropy may be a factor contributing to the increased risk for CAD in subjects suffering from MDD or reporting high counts of depressive symptoms. The absence of environmental correlations in the presence of significant genetic correlations for a number of the CAD risk factors (CFR, cholesterol, inflammation and regular exercise) suggests that pleiotropy is the sole reason for the association between MDD and these CAD risk factors, whereas for other CAD risk factors (e.g. smoking) and CAD incidence itself, pleiotropy may coexist with causal effects.”

“By far the most tested polymorphism in psychiatric genetics is a 43-base pair insertion or deletion in the promoter region of the serotonin transporter gene (5HTT, renamed SLC6A4). About 55% of Caucasians carry a long allele (L) with 16 repeat units. The short allele (S, with 14 repeat units) of this length polymorphism repeat (LPR) reduces transcriptional efficiency, resulting in decreased serotonin transporter expression and function [83]. Because serotonin plays a key role in one of the major theories of MDD [84], and because the most prescribed antidepressants act directly on this transporter, 5HTT is an obvious candidate gene for this disorder. […] The wealth of studies attempting to associate the 5HTTLPR to MDD or related personality traits tells a revealing story about the fate of most candidate genes in psychiatric genetics. Many conflicting findings have been reported, and the two largest studies failed to link the 5HTTLPR to depressive symptoms or clinical MDD [85, 86]. Even at the level of reviews and meta-analyses, conflicting conclusions have been drawn about the role of this polymorphism in the development of MDD [87, 88]. The initially promising explanation for discrepant findings – potential interactive effects of the 5HTTLPR and stressful life events [89] – did not survive meta-analysis [90].”

“Across the board, overlooking the wealth of candidate gene studies on MDD, one is inclined to conclude that this approach has failed to unambiguously identify genetic variants involved in MDD […]. Hope is now focused on the newer GWA [genome wide association] approach. […] At the time of writing, only two GWA studies had been published on MDD [81, 95]. […] In theory, the strategy to identify potential pleiotropic genes in the MDD–CAD relationship is extremely straightforward. We simply select the genes that occur in the lists of confirmed genes from the GWA studies for both traits. In practice, this is hard to do, because genetics in psychiatry is clearly lagging behind genetics in cardiology and diabetes medicine. […] What is shown by the reviewed twin studies is that some genetic variants may influence MDD and CAD risk factors. This can occur through one of three mechanisms: (a) the genetic variants that increase the risk for MDD become part of the heritability of CAD through a causal effect of MDD on CAD risk factors (causality); (b) the genetic variants that increase the risk for CAD become part of the heritability of MDD through a direct causal effect of CAD on MDD (reverse causality); (c) the genetic variants influence shared risk factors that independently increase the risk for MDD as well as CAD (pleiotropy). I suggest that to fully explain the MDD–CAD association we need to be willing to be open to the possibility that these three mechanisms co-exist. Even in the presence of true pleiotropic effects, MDD may influence CAD risk factors, and having CAD in turn may worsen the course of MDD.”

“Patients with depression are more likely to exhibit several unhealthy behaviours or avoid other health-promoting ones than those without depression. […] Patients with depression are more likely to have sleep disturbances [6]. […] sleep deprivation has been linked with obesity, diabetes and the metabolic syndrome [13]. […] Physical inactivity and depression display a complex, bidirectional relationship. Depression leads to physical inactivity and physical inactivity exacerbates depression [19]. […] smoking rates among those with depression are about twice that of the general population [29]. […] Poor attention to self-care is often a problem among those with major depressive disorder. In the most severe cases, those with depression may become inattentive to their personal hygiene. One aspect of this relationship that deserves special attention with respect to cardiovascular disease is the association of depression and periodontal disease. […] depression is associated with poor adherence to medical treatment regimens in many chronic illnesses, including heart disease. […] There is some evidence that among patients with an acute coronary syndrome, improvement in depression is associated with improvement in adherence. […] Individuals with depression are often socially withdrawn or isolated. It has been shown that patients with heart disease who are depressed have less social support [64], and that social isolation or poor social support is associated with increased mortality in heart disease patients [65–68]. […] [C]linicians who make recommendations to patients recovering from a heart attack should be aware that low levels of social support and social isolation are particularly common among depressed individuals and that high levels of social support appear to protect patients from some of the negative effects of depression [78].”

“Self-efficacy describes an individual’s self-confidence in his/her ability to accomplish a particular task or behaviour. Self-efficacy is an important construct to consider when one examines the psychological mechanisms linking depression and heart disease, since it influences an individual’s engagement in behaviour and lifestyle changes that may be critical to improving cardiovascular risk. Many studies on individuals with chronic illness show that depression is often associated with low self-efficacy [95–97]. […] Low self-efficacy is associated with poor adherence behaviour in patients with heart failure [101]. […] Much of the interest in self-efficacy comes from the fact that it is modifiable. Self-efficacy-enhancing interventions have been shown to improve cardiac patients’ self-efficacy and thereby improve cardiac health outcomes [102]. […] One problem with targeting self-efficacy in depressed heart disease patients is [however] that depressive symptoms reduce the effects of self-efficacy-enhancing interventions [105, 106].”

“Taken together, [the] SADHART and ENRICHD [studies] suggest, but do not prove, that antidepressant drug therapy in general, and SSRI treatment in particular, improve cardiovascular outcomes in depressed post-acute coronary syndrome (ACS) patients. […] even large epidemiological studies of depression and antidepressant treatment are not usually informative, because they confound the effects of depression and antidepressant treatment. […] However, there is one Finnish cohort study in which all subjects […] were followed up through a nationwide computerised database [17]. The purpose of this study was not to examine the relationship between depression and cardiac mortality, but rather to look at the relationship between antidepressant use and suicide. […] unexpectedly, ‘antidepressant use, and especially SSRI use, was associated with a marked reduction in total mortality (−49%, p < 0.001), mostly attributable to a decrease in cardiovascular deaths’. The study involved 15 390 patients with a mean follow-up of 3.4 years […] One of the marked differences between the SSRIs and the earlier tricyclic antidepressants is that the SSRIs do not cause cardiac death in overdose as the tricyclics do [41]. There has been literature that suggested that tricyclics even at therapeutic doses could be cardiotoxic and more problematic than SSRIs [42, 43]. What has been surprising is that both in the clinical trial data from ENRICHD and the epidemiological data from Finland, tricyclic treatment has also been associated with a decreased risk of mortality. […] Given that SSRI treatment of depression in the post-ACS period is safe, effective in reducing depressed mood, able to improve health behaviours and may reduce subsequent cardiac morbidity and mortality, it would seem obvious that treating depression is strongly indicated. However, the vast majority of post-ACS patients will not see a psychiatrically trained professional and many cases are not identified [33].”

“That depression is associated with cardiovascular morbidity and mortality is no longer open to question. Similarly, there is no question that the risk of morbidity and mortality increases with increasing severity of depression. Questions remain about the mechanisms that underlie this association, whether all types of depression carry the same degree of risk and to what degree treating depression reduces that risk. There is no question that the benefits of treating depression associated with coronary artery disease far outweigh the risks.”

“Two competing trends are emerging in research on psychotherapy for depression in cardiac patients. First, the few rigorous RCTs that have been conducted so far have shown that even the most efficacious of the current generation of interventions produce relatively modest outcomes. […] Second, there is a growing recognition that, even if an intervention is highly efficacious, it may be difficult to translate into clinical practice if it requires intensive or extensive contacts with a highly trained, experienced, clinically sophisticated psychotherapist. It can even be difficult to implement such interventions in the setting of carefully controlled, randomised efficacy trials. Consequently, there are efforts to develop simpler, more efficient interventions that can be delivered by a wider variety of interventionists. […] Although much more work remains to be done in this area, enough is already known about psychotherapy for comorbid depression in heart disease to suggest that a higher priority should be placed on translation of this research into clinical practice. In many cases, cardiac patients do not receive any treatment for their depression.”

August 14, 2017 Posted by | Books, Cardiology, Diabetes, Genetics, Medicine, Pharmacology, Psychiatry, Psychology

Depression and Heart Disease (I)

I’m currently reading this book. It’s a great book, with lots of interesting observations.

Below I’ve added some quotes from the book.

“Frasure-Smith et al. [1] demonstrated that patients diagnosed with depression post MI [myocardial infarction, US] were more than five times more likely to die from cardiac causes by 6 months than those without major depression. At 18 months, cardiac mortality had reached 20% in patients with major depression, compared with only 3% in non-depressed patients [5]. Recent work has confirmed and extended these findings. A meta-analysis of 22 studies of post-MI subjects found that post-MI depression was associated with a 2.0- to 2.5-fold increased risk of negative cardiovascular outcomes [6]. Another meta-analysis examining 20 studies of subjects with MI, coronary artery bypass graft (CABG), angioplasty or angiographically documented CAD found a twofold increased risk of death among depressed compared with non-depressed patients [7]. Though studies included in these meta-analyses had substantial methodological variability, the overall results were quite similar [8].”
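
[For readers unfamiliar with how such risk estimates are computed: a relative risk and its 95% confidence interval can be derived from a simple 2×2 table via the log-RR normal approximation. A minimal sketch of my own – the counts below are hypothetical, loosely consistent with the 20% vs. 3% mortality figures; the actual denominators are not given in the quote – US]

```python
import math

def relative_risk(events_exposed, n_exposed, events_unexposed, n_unexposed):
    """Relative risk with a 95% CI via the log-RR normal approximation."""
    rr = (events_exposed / n_exposed) / (events_unexposed / n_unexposed)
    se = math.sqrt(1 / events_exposed - 1 / n_exposed
                   + 1 / events_unexposed - 1 / n_unexposed)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, (lo, hi)

# Hypothetical counts: 7/35 deaths (20%) among depressed patients vs
# 6/200 deaths (3%) among non-depressed patients -> RR ~ 6.7.
rr, ci = relative_risk(7, 35, 6, 200)
```

With counts this small the interval is wide, which is why the meta-analyses pooling many studies give much tighter (and more modest) estimates.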

“Blumenthal et al. [31] published the largest cohort study (N = 817) to date on depression in patients undergoing CABG and measured depression scores, using the CES-D, before and at 6 months after CABG. Of those patients, 26% had minor depression (CES-D score 16–26) and 12% had moderate to severe depression (CES-D score ≥27). Over a mean follow-up of 5.2 years, the risk of death, compared with those without depression, was 2.4 (HR adjusted; 95% CI 1.4, 4.0) in patients with moderate to severe depression and 2.2 (95% CI 1.2, 4.2) in those whose depression persisted from baseline to follow-up at 6 months. This is one of the few studies that found a dose response (in terms of severity and duration) between depression and death in CABG in particular and in CAD in general.”

“Of the patients with known CAD but no recent MI, 12–23% have major depressive disorder by DSM-III or DSM-IV criteria [20, 21]. Two studies have examined the prognostic association of depression in patients whose CAD was confirmed by angiography. […] In [Carney et al.], a diagnosis of major depression by DSM-III criteria was the best predictor of cardiac events (MI, bypass surgery or death) at 1 year, more potent than other clinical risk factors such as impaired left ventricular function, severity of coronary disease and smoking among the 52 patients. The relative risk of a cardiac event was 2.2 times higher in patients with major depression than those with no depression. […] Barefoot et al. [23] provided a larger sample size and longer follow-up duration in their study of 1250 patients who had undergone their first angiogram. […] Compared with non-depressed patients, those who were moderately to severely depressed had 69% higher odds of cardiac death and 78% higher odds of all-cause mortality. The mildly depressed had a 38% higher risk of cardiac death and a 57% higher risk of all-cause mortality than non-depressed patients.”

“Ford et al. [43] prospectively followed all male medical students who entered the Johns Hopkins Medical School from 1948 to 1964. At entry, the participants completed questionnaires about their personal and family history, health status and health behaviour, and underwent a standard medical examination. The cohort was then followed after graduation by mailed, annual questionnaires. The incidence of depression in this study was based on the mailed surveys […] 1190 participants [were included in the] analysis. The cumulative incidence of clinical depression in this population at 40 years of follow-up was 12%, with no evidence of a temporal change in the incidence. […] In unadjusted analysis, clinical depression was associated with an almost twofold higher risk of subsequent CAD. This association remained after adjustment for time-dependent covariates […]. The relative risk ratio for CAD development with versus without clinical depression was 2.12 (95% CI 1.24, 3.63), as was their relative risk ratio for future MI (95% CI 1.11, 4.06), after adjustment for age, baseline serum cholesterol level, parental MI, physical activity, time-dependent smoking, hypertension and diabetes. The median time from the first episode of clinical depression to first CAD event was 15 years, with a range of 1–44 years.”

“In the Women’s Ischaemia Syndrome Evaluation (WISE) study, 505 women referred for coronary angiography were followed for a mean of 4.9 years and completed the BDI [46]. Significantly increased mortality and cardiovascular events were found among women with elevated BDI scores, even after adjustment for age, cholesterol, stenosis score on angiography, smoking, diabetes, education, hypertension and body mass index (RR 3.1; 95% CI 1.5, 6.3). […] Further compelling evidence comes from a meta-analysis of 28 studies comprising almost 80 000 subjects [47], which demonstrated that, despite heterogeneity and differences in study quality, depression was consistently associated with increased risk of cardiovascular diseases in general, including stroke.”

“The preponderance of evidence strongly suggests that depression is a risk factor for CAD [coronary artery disease, US] development. […] In summary, it is fair to conclude that depression plays a significant role in CAD development, independent of conventional risk factors, and its adverse impact endures over time. The impact of depression on the risk of MI is probably similar to that of smoking [52]. […] Results of longitudinal cohort studies suggest that depression occurs before the onset of clinically significant CAD […] Recent brain imaging studies have indicated that lesions resulting from cerebrovascular insufficiency may lead to clinical depression [54, 55]. Depression may be a clinical manifestation of atherosclerotic lesions in certain areas of the brain that cause circulatory deficits. The depression then exacerbates the onset of CAD. The exact aetiological mechanism of depression and CAD development remains to be clarified.”

“Rutledge et al. [65] conducted a meta-analysis in 2006 in order to better understand the prevalence of depression among patients with CHF and the magnitude of the relationship between depression and clinical outcomes in the CHF population. They found that clinically significant depression was present in 21.5% of CHF patients, varying by the use of questionnaires versus diagnostic interview (33.6% and 19.3%, respectively). The combined results suggested higher rates of death and secondary events (RR 2.1; 95% CI 1.7, 2.6), and trends toward increased health care use and higher rates of hospitalisation and emergency room visits among depressed patients.”

“In the past 15 years, evidence has been provided that physically healthy subjects who suffer from depression are at increased risk for cardiovascular morbidity and mortality [1, 2], and that the occurrence of depression in patients with either unstable angina [3] or myocardial infarction (MI) [4] increases the risk for subsequent cardiac death. Moreover, epidemiological studies have proved that cardiovascular disease is a risk factor for depression, since the prevalence of depression in individuals with a recent MI or with coronary artery disease (CAD) or congestive heart failure has been found to be significantly higher than in the general population [5, 6]. […] findings suggest a bidirectional association between depression and cardiovascular disease. The pathophysiological mechanisms underlying this association are, at present, largely unclear, but several candidate mechanisms have been proposed.”

“Autonomic nervous system dysregulation is one of the most plausible candidate mechanisms underlying the relationship between depression and ischaemic heart disease, since changes of autonomic tone have been detected in both depression and cardiovascular disease [7], and autonomic imbalance […] has been found to lower the threshold for ventricular tachycardia, ventricular fibrillation and sudden cardiac death in patients with CAD [8, 9]. […] Imbalance between prothrombotic and antithrombotic mechanisms and endothelial dysfunction have [also] been suggested to contribute to the increased risk of cardiac events in both medically well patients with depression and depressed patients with CAD. Depression has been consistently associated with enhanced platelet activation […] evidence has accumulated that selective serotonin reuptake inhibitors (SSRIs) reduce platelet hyperreactivity and hyperaggregation of depressed patients [39, 40] and reduce the release of the platelet/endothelial biomarkers β-thromboglobulin, P-selectin and E-selectin in depressed patients with acute CAD [41]. This may explain the efficacy of SSRIs in reducing the risk of mortality in depressed patients with CAD [42–44].”

“[S]everal studies have shown that reduced endothelium-dependent flow-mediated vasodilatation […] occurs in depressed adults with or without CAD [48–50]. Atherosclerosis with subsequent plaque rupture and thrombosis is the main determinant of ischaemic cardiovascular events, and atherosclerosis itself is now recognised to be fundamentally an inflammatory disease [56]. Since activation of inflammatory processes is common to both depression and cardiovascular disease, it would be reasonable to argue that the link between depression and ischaemic heart disease might be mediated by inflammation. Evidence has been provided that major depression is associated with a significant increase in circulating levels of both pro-inflammatory cytokines, such as IL-6 and TNF-α, and inflammatory acute phase proteins, especially the C-reactive protein (CRP) [57, 58], and that antidepressant treatment is able to normalise CRP levels irrespective of whether or not patients are clinically improved [59]. […] Vaccarino et al. [79] assessed specifically whether inflammation is the mechanism linking depression to ischaemic cardiac events and found that, in women with suspected coronary ischaemia, depression was associated with increased circulating levels of CRP and IL-6 and was a strong predictor of ischaemic cardiac events”

“Major depression has been consistently associated with hyperactivity of the HPA axis, with a consequent overstimulation of the sympathetic nervous system, which in turn results in increased circulating catecholamine levels and enhanced serum cortisol concentrations [68–70]. This may cause an imbalance in sympathetic and parasympathetic activity, which results in elevated heart rate and blood pressure, reduced HRV [heart rate variability], disruption of ventricular electrophysiology with increased risk of ventricular arrhythmias as well as an increased risk of atherosclerotic plaque rupture and acute coronary thrombosis. […] In addition, glucocorticoids mobilise free fatty acids, causing endothelial inflammation and excessive clotting, and are associated with hypertension, hypercholesterolaemia and glucose dysregulation [88, 89], which are risk factors for CAD.”

“Most of the literature on [the] comorbidity [between major depressive disorder (MDD) and coronary artery disease (CAD), US] has tended to favour the hypothesis of a causal effect of MDD on CAD, but reversed causality has also been suggested to contribute. Patients with severe CAD at baseline, and consequently a worse prognosis, may simply be more prone to report mood disturbances than less severely ill patients. Furthermore, in pre-morbid populations, incipient atherosclerosis in cerebral vessels may cause depressive symptoms before the onset of actual cardiac or cerebrovascular events, a variant of reverse causality known as the ‘vascular depression’ hypothesis [2]. To resolve causality, comorbidity between MDD and CAD has been addressed in longitudinal designs. Most prospective studies reported that clinical depression or depressive symptoms at baseline predicted higher incidence of heart disease at follow-up [1], which seems to favour the hypothesis of causal effects of MDD. We need to remind ourselves, however […] [that] [p]rospective associations do not necessarily equate causation. Higher incidence of CAD in depressed individuals may reflect the operation of common underlying factors on MDD and CAD that become manifest in mental health at an earlier stage than in cardiac health. […] [T]he association between MDD and CAD may be due to underlying genetic factors that lead to increased symptoms of anxiety and depression, but may also independently influence the atherosclerotic process. This phenomenon, where low-level biological variation has effects on multiple complex traits at the organ and behavioural level, is called genetic ‘pleiotropy’. If present in a time-lagged form, that is if genetic effects on MDD risk precede effects of the same genetic variants on CAD risk, this phenomenon can cause longitudinal correlations that mimic a causal effect of MDD.”

 

August 12, 2017 Posted by | Books, Cardiology, Genetics, Medicine, Neurology, Pharmacology, Psychiatry, Psychology | Leave a comment

A few diabetes papers of interest

i. Clinically Relevant Cognitive Impairment in Middle-Aged Adults With Childhood-Onset Type 1 Diabetes.

“Modest cognitive dysfunction is consistently reported in children and young adults with type 1 diabetes (T1D) (1). Mental efficiency, psychomotor speed, executive functioning, and intelligence quotient appear to be most affected (2); studies report effect sizes between 0.2 and 0.5 (small to modest) in children and adolescents (3) and between 0.4 and 0.8 (modest to large) in adults (2). Whether effect sizes continue to increase as those with T1D age, however, remains unknown.

A key issue not yet addressed is whether aging individuals with T1D have an increased risk of manifesting “clinically relevant cognitive impairment,” defined by comparing individual cognitive test scores to demographically appropriate normative means, as opposed to the more commonly investigated “cognitive dysfunction,” or between-group differences in cognitive test scores. Unlike the extensive literature examining cognitive impairment in type 2 diabetes, we know of only one prior study examining cognitive impairment in T1D (4). This early study reported a higher rate of clinically relevant cognitive impairment among children (10–18 years of age) diagnosed before compared with after age 6 years (24% vs. 6%, respectively) or a non-T1D cohort (6%).”

“This study tests the hypothesis that childhood-onset T1D is associated with an increased risk of developing clinically relevant cognitive impairment detectable by middle age. We compared cognitive test results between adults with and without T1D and used demographically appropriate published norms (10–12) to determine whether participants met criteria for impairment for each test; aging and dementia studies have selected a score ≥1.5 SD worse than the norm on that test, corresponding to performance at or below the seventh percentile (13).”
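As an aside, the correspondence between the ≥1.5 SD criterion and the seventh percentile is simply a property of the normal distribution; a one-liner confirms it (my own check, assuming normally distributed norm-referenced test scores):

```python
from statistics import NormalDist

# Probability of scoring 1.5 SD or more below the normative mean,
# assuming scores are normally distributed.
pctile = NormalDist().cdf(-1.5)
print(f"{pctile:.4f}")  # ≈ 0.0668, i.e. roughly the 7th percentile
```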

“During 2010–2013, 97 adults diagnosed with T1D and aged <18 years (age and duration 49 ± 7 and 41 ± 6 years, respectively; 51% female) and 138 similarly aged adults without T1D (age 49 ± 7 years; 55% female) completed extensive neuropsychological testing. Biomedical data on participants with T1D were collected periodically since 1986–1988.  […] The prevalence of clinically relevant cognitive impairment was five times higher among participants with than without T1D (28% vs. 5%; P < 0.0001), independent of education, age, or blood pressure. Effect sizes were large (Cohen d 0.6–0.9; P < 0.0001) for psychomotor speed and visuoconstruction tasks and were modest (d 0.3–0.6; P < 0.05) for measures of executive function. Among participants with T1D, prevalent cognitive impairment was related to 14-year average A1c >7.5% (58 mmol/mol) (odds ratio [OR] 3.0; P = 0.009), proliferative retinopathy (OR 2.8; P = 0.01), and distal symmetric polyneuropathy (OR 2.6; P = 0.03) measured 5 years earlier; higher BMI (OR 1.1; P = 0.03); and ankle-brachial index ≥1.3 (OR 4.2; P = 0.01) measured 20 years earlier, independent of education.”

“Having T1D was the only factor significantly associated with the between-group difference in clinically relevant cognitive impairment in our sample. Traditional risk factors for age-related cognitive impairment, in particular older age and high blood pressure (24), were not related to the between-group difference we observed. […] Similar to previous studies of younger adults with T1D (14,26), we found no relationship between the number of severe hypoglycemic episodes and cognitive impairment. Rather, we found that chronic hyperglycemia, via its associated vascular and metabolic changes, may have triggered structural changes in the brain that disrupt normal cognitive function.”

Just to be absolutely clear about these results: The type 1 diabetics they recruited in this study were on average not yet fifty years old, yet more than one in four of them were cognitively impaired to a clinically relevant degree. This is a huge effect. As they note later in the paper:

“Unlike previous reports of mild/modest cognitive dysfunction in young adults with T1D (1,2), we detected clinically relevant cognitive impairment in 28% of our middle-aged participants with T1D. This prevalence rate in our T1D cohort is comparable to the prevalence of mild cognitive impairment typically reported among community-dwelling adults aged 85 years and older (29%) (20).”

The type 1 diabetics included in the study had had diabetes for roughly a decade more than I have. And the number of cognitively impaired individuals in that sample corresponds roughly to what you find when you test random 85+ year-olds. Having type 1 diabetes is not good for your brain.

ii. Comment on Nunley et al. Clinically Relevant Cognitive Impairment in Middle-Aged Adults With Childhood-Onset Type 1 Diabetes.

This one is a short comment to the above paper, below I’ve quoted ‘the meat’ of the comment:

“While the […] study provides us with important insights regarding cognitive impairment in adults with type 1 diabetes, we regret that depression has not been taken into account. A systematic review and meta-analysis published in 2014 identified significant objective cognitive impairment in adults and adolescents with depression regarding executive functioning, memory, and attention relative to control subjects (2). Moreover, depression is two times more common in adults with diabetes compared with those without this condition, regardless of type of diabetes (3). There is even evidence that the co-occurrence of diabetes and depression leads to additional health risks such as increased mortality and dementia (3,4); this might well apply to cognitive impairment as well. Furthermore, in people with diabetes, the presence of depression has been associated with the development of diabetes complications, such as retinopathy, and higher HbA1c values (3). These are exactly the diabetes-specific correlates that Nunley et al. (1) found.”

“We believe it is a missed opportunity that Nunley et al. (1) mainly focused on biological variables, such as hyperglycemia and microvascular disease, and did not take into account an emotional disorder widely represented among people with diabetes and closely linked to cognitive impairment. Even though severe or chronic cases of depression are likely to have been excluded in the group without type 1 diabetes based on exclusion criteria (1), data on the presence of depression (either measured through a diagnostic interview or by using a validated screening questionnaire) could have helped to interpret the present findings. […] Determining the role of depression in the relationship between cognitive impairment and type 1 diabetes is of significant importance. Treatment of depression might improve cognitive impairment both directly by alleviating cognitive depression symptoms and indirectly by improving treatment nonadherence and glycemic control, consequently lowering the risk of developing complications.”

iii. Prevalence of Diabetes and Diabetic Nephropathy in a Large U.S. Commercially Insured Pediatric Population, 2002–2013.

“[W]e identified 96,171 pediatric patients with diabetes and 3,161 pediatric patients with diabetic nephropathy during 2002–2013. We estimated prevalence of pediatric diabetes overall, by diabetes type, age, and sex, and prevalence of pediatric diabetic nephropathy overall, by age, sex, and diabetes type.”

“Although type 1 diabetes accounts for a majority of childhood and adolescent diabetes, type 2 diabetes is becoming more common with the increasing rate of childhood obesity and it is estimated that up to 45% of all new patients with diabetes in this age-group have type 2 diabetes (1,2). With the rising prevalence of diabetes in children, a rise in diabetes-related complications, such as nephropathy, is anticipated. Moreover, data suggest that the development of clinical macrovascular complications, neuropathy, and nephropathy may be especially rapid among patients with young-onset type 2 diabetes (age of onset <40 years) (3–6). However, the natural history of young patients with type 2 diabetes and resulting complications has not been well studied.”

I’m always interested in the identification mechanisms applied in papers like this one, and I’m a little confused about the high number of patients without prescriptions (almost one-third of patients); I sort of assume these patients do take (/are given) prescription drugs, but get them from sources not available to the researchers (parents get prescriptions for the antidiabetic drugs, and the researchers don’t have access to these data? Something like this..) but this is a bit unclear. The mechanism they employ in the paper is not perfect (no mechanism is), but it probably works:

“Patients who had one or more prescription(s) for insulin and no prescriptions for another antidiabetes medication were classified as having type 1 diabetes, while those who filled prescriptions for noninsulin antidiabetes medications were considered to have type 2 diabetes.”

When covering limitations of the paper, they observe incidentally in this context that:

“Klingensmith et al. (31) recently reported that in the initial month after diagnosis of type 2 diabetes around 30% of patients were treated with insulin only. Thus, we may have misclassified a small proportion of type 2 cases as type 1 diabetes or vice versa. Despite this, we found that 9% of patients had onset of type 2 diabetes at age <10 years, consistent with the findings of Klingensmith et al. (8%), but higher than reported by the SEARCH for Diabetes in Youth study (<3%) (31,32).”

Some more observations from the paper:

“There were 149,223 patients aged <18 years at first diagnosis of diabetes in the CCE database from 2002 through 2013. […] Type 1 diabetes accounted for a majority of the pediatric patients with diabetes (79%). Among these, 53% were male and 53% were aged 12 to <18 years at onset, while among patients with type 2 diabetes, 60% were female and 79% were aged 12 to <18 years at onset.”

“The overall annual prevalence of all diabetes increased from 1.86 to 2.82 per 1,000 during years 2002–2013; it increased on average by 9.5% per year from 2002 to 2006 and slowly increased by 0.6% after that […] The prevalence of type 1 diabetes increased from 1.48 to 2.32 per 1,000 during the study period (average increase of 8.5% per year from 2002 to 2006 and 1.4% after that; both P values <0.05). The prevalence of type 2 diabetes increased from 0.38 to 0.67 per 1,000 during 2002 through 2006 (average increase of 13.3% per year; P < 0.05) and then dropped from 0.56 to 0.49 per 1,000 during 2007 through 2013 (average decrease of 2.7% per year; P < 0.05). […] Prevalence of any diabetes increased by age, with the highest prevalence in patients aged 12 to <18 years (ranging from 3.47 to 5.71 per 1,000 from 2002 through 2013). […] The annual prevalence of diabetes increased over the study period mainly because of increases in type 1 diabetes.”

“Dabelea et al. (8) reported, based on data from the SEARCH for Diabetes in Youth study, that the annual prevalence of type 1 diabetes increased from 1.48 to 1.93 per 1,000 and from 0.34 to 0.46 per 1,000 for type 2 diabetes from 2001 to 2009 in U.S. youth. In our study, the annual prevalence of type 1 diabetes was 1.48 per 1,000 in 2002 and 2.10 per 1,000 in 2009, which is close to their reported prevalence.”

“We identified 3,161 diabetic nephropathy cases. Among these, 1,509 cases (47.7%) were of specific diabetic nephropathy and 2,253 (71.3%) were classified as probable cases. […] The annual prevalence of diabetic nephropathy in pediatric patients with diabetes increased from 1.16 to 3.44% between 2002 and 2013; it increased by on average 25.7% per year from 2002 to 2005 and slowly increased by 4.6% after that (both P values <0.05).”

Do note that the relationship between nephropathy prevalence and diabetes prevalence is complicated, and that you cannot easily explain an increase in the prevalence of nephropathy over time simply by referring to an increased prevalence of diabetes during the same time period. This would in fact be a very wrong thing to do, in part but not only on account of the data structure employed in this study. One problem which is probably easy to understand is that if more children got diabetes but the same proportion of those new diabetics got nephropathy, the diabetes prevalence would go up but the diabetic nephropathy prevalence would remain fixed; when you calculate the diabetic nephropathy prevalence you implicitly condition on diabetes status. But this just scratches the surface of the issues you encounter when you try to link these variables, because the relationship between the two variables is complicated; there's an age pattern to diabetes risk, with risk (incidence) increasing with age (up to a point, after which it falls – in most samples I've seen in the past, peak incidence in pediatric populations is well below the age of 18). However diabetes prevalence increases monotonically with age as long as the age-specific death rate of diabetics is lower than the age-specific incidence, because diabetes is chronic, and then on top of that you have nephropathy-related variables, which display diabetes-related duration-dependence (meaning that although nephropathy risk is also increasing with age when you look at that variable in isolation, that age-risk relationship is confounded by diabetes duration – a type 1 diabetic at the age of 12 who's had diabetes for 10 years has a higher risk of nephropathy than a 16-year-old who developed diabetes the year before).
When a newly diagnosed pediatric patient is included in the diabetes sample here this will actually decrease the nephropathy prevalence in the short run, but not in the long run, assuming no changes in diabetes treatment outcomes over time. This is because the probability that that individual has diabetes-related kidney problems as a newly diagnosed child is zero, so he or she will unquestionably only contribute to the denominator during the first years of illness (the situation in the middle-aged type 2 context is different; here you do sometimes have newly-diagnosed patients who have developed complications already). This is one reason why it would be quite wrong to say that increased diabetes prevalence in this sample is the reason why diabetic nephropathy is increasing as well. Unless the time period you look at is very long (e.g. you have a setting where you follow all individuals with a diagnosis until the age of 18), the impact of increasing prevalence of one condition may well be expected to have a negative impact on the estimated risk of associated conditions, if those associated conditions display duration-dependence (which all major diabetes complications do). A second factor supporting a default assumption of increasing incidence of diabetes leading to an expected decreasing rate of diabetes-related complications is of course the fact that treatment options have tended to increase over time, and especially if you take a long view (look back 30–40 years) the increase in treatment options and improved medical technology have led to improved metabolic control and better outcomes.

That both variables grew over time might be taken to indicate that both more children got diabetes and that a larger proportion of this increased number of children with diabetes developed kidney problems, but this stuff is a lot more complicated than it might look and it's in particular important to keep in mind that, say, the 2005 sample and the 2010 sample do not include the same individuals, although there'll of course be some overlap; in age-stratified samples like this you always have some level of implicit continuous replacement, with newly diagnosed patients entering and replacing the 18-year-olds who leave the sample. As long as prevalence is constant over time, associated outcome variables may be reasonably easy to interpret, but when you have dynamic samples as well as increasing prevalence over time it gets difficult to say much with any degree of certainty unless you crunch the numbers in a lot of detail (and it might be difficult even if you do that). A factor I didn't mention above but which is of course also relevant is that you need to be careful about how to interpret prevalence rates when you look at complications with high mortality rates (and late-stage diabetic nephropathy is indeed a complication with high mortality); in such a situation improvements in treatment outcomes may have large effects on prevalence rates but no effect on incidence. Increased prevalence is not always bad news; sometimes it is good news indeed. Gleevec substantially increased the prevalence of CML.
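To make the short-run mechanics concrete, here's a deliberately simplified toy model of my own (all numbers hypothetical: everybody is diagnosed at age 10, leaves the pediatric sample after 8 years, and cumulative nephropathy risk is 1% per year of diabetes duration), showing how a surge in incidence mechanically lowers nephropathy prevalence among diabetics even though nobody's individual risk changed:

```python
def nephropathy_prevalence(counts_by_duration, risk_per_year=0.01):
    """Prevalence of nephropathy among diabetics, given expected
    patient counts indexed by diabetes duration (years)."""
    cases = sum(n * risk_per_year * d for d, n in enumerate(counts_by_duration))
    return cases / sum(counts_by_duration)

def step(counts, new_diagnoses, max_duration=8):
    """Advance one year: durations shift up by one, the longest-duration
    cohort ages out of the sample, a new cohort enters at duration 0."""
    return [new_diagnoses] + counts[:max_duration - 1]

# Steady state: 100 new diagnoses per year for 8 years.
counts = [100] * 8
baseline = nephropathy_prevalence(counts)   # 0.035

# Incidence doubles; the new, complication-free patients enter only
# the denominator, so prevalence among diabetics falls in the short run.
counts = step(counts, 200)
after_surge = nephropathy_prevalence(counts)  # ≈ 0.031 < baseline

print(baseline, after_surge)
```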

In terms of the prevalence-outcomes (/complication risk) connection, there are also in my opinion reasons to assume that there may be multiple causal pathways between prevalence and outcomes. For example a very low prevalence of a condition in a given area may mean that fewer specialists are educated to take care of these patients than would be the case for an area with a higher prevalence, and this may translate into a more poorly developed care infrastructure. Greatly increasing prevalence may on the other hand lead to a lower level of care for all patients with the illness, not just the newly diagnosed ones, due to binding budget constraints and care rationing. And why might you have changes in prevalence; might they not sometimes rather be related to changes in diagnostic practices, rather than changes in the True* prevalence? If that’s the case, you might not be comparing apples to apples when you’re comparing the evolving complication rates. There are in my opinion many reasons to believe that the relationship between chronic conditions and the complication rates of these conditions is far from simple to model.

All this said, kidney problems in children with diabetes is still rare, compared to the numbers you see when you look at adult samples with longer diabetes duration. It’s also worth distinguishing between microalbuminuria and overt nephropathy; children rarely proceed to develop diabetes-related kidney failure, although poor metabolic control may mean that they do develop this complication later, in early adulthood. As they note in the paper:

“It has been reported that overt diabetic nephropathy and kidney failure caused by either type 1 or type 2 diabetes are uncommon during childhood or adolescence (24). In this study, the annual prevalence of diabetic nephropathy for all cases ranged from 1.16 to 3.44% in pediatric patients with diabetes and was extremely low in the whole pediatric population (range 2.15 to 9.70 per 100,000), confirming that diabetic nephropathy is a very uncommon condition in youth aged <18 years. We observed that the prevalence of diabetic nephropathy increased in both specific and unspecific cases before 2006, with a leveling off of the specific nephropathy cases after 2005, while the unspecific cases continued to increase.”

iv. Adherence to Oral Glucose-Lowering Therapies and Associations With 1-Year HbA1c: A Retrospective Cohort Analysis in a Large Primary Care Database.

“Between a third and a half of medicines prescribed for type 2 diabetes (T2DM), a condition in which multiple medications are used to control cardiovascular risk factors and blood glucose (1,2), are not taken as prescribed (3–6). However, estimates vary widely depending on the population being studied and the way in which adherence to recommended treatment is defined.”

“A number of previous studies have used retrospective databases of electronic health records to examine factors that might predict adherence. A recent large cohort database examined overall adherence to oral therapy for T2DM, taking into account changes of therapy. It concluded that overall adherence was 69%, with individuals newly started on treatment being significantly less likely to adhere (19).”

“The impact of continuing to take glucose-lowering medicines intermittently, but not as recommended, is unknown. Medication possession (expressed as a ratio of actual possession to expected possession), derived from prescribing records, has been identified as a valid adherence measure for people with diabetes (7). Previous studies have been limited to small populations in managed-care systems in the U.S. and focused on metformin and sulfonylurea oral glucose-lowering treatments (8,9). Further studies need to be carried out in larger groups of people that are more representative of the general population.

The Clinical Practice Research Database (CPRD) is a long established repository of routine clinical data from more than 13 million patients registered with primary care services in England. […] The Genetics of Diabetes and Audit Research Tayside Study (GoDARTS) database is derived from integrated health records in Scotland with primary care, pharmacy, and hospital data on 9,400 patients with diabetes. […] We conducted a retrospective cohort study using [these databases] to examine the prevalence of nonadherence to treatment for type 2 diabetes and investigate its potential impact on HbA1c reduction stratified by type of glucose-lowering medication.”

“In CPRD and GoDARTS, 13% and 15% of patients, respectively, were nonadherent. Proportions of nonadherent patients varied by the oral glucose-lowering treatment prescribed (range 8.6% [thiazolidinedione] to 18.8% [metformin]). Nonadherent, compared with adherent, patients had a smaller HbA1c reduction (0.4% [4.4 mmol/mol] and 0.46% [5.0 mmol/mol] for CPRD and GoDARTS, respectively). Difference in HbA1c response for adherent compared with nonadherent patients varied by drug (range 0.38% [4.1 mmol/mol] to 0.75% [8.2 mmol/mol] lower in adherent group). Decreasing levels of adherence were consistently associated with a smaller reduction in HbA1c.”

“These findings show an association between adherence to oral glucose-lowering treatment, measured by the proportion of medication obtained on prescription over 1 year, and the corresponding decrement in HbA1c, in a population of patients newly starting treatment and continuing to collect prescriptions. The association is consistent across all commonly used oral glucose-lowering therapies, and the findings are consistent between the two data sets examined, CPRD and GoDARTS. Nonadherent patients, taking on average <80% of the intended medication, had about half the expected reduction in HbA1c. […] Reduced medication adherence for commonly used glucose-lowering therapies among patients persisting with treatment is associated with smaller HbA1c reductions compared with those taking treatment as recommended. Differences observed in HbA1c responses to glucose-lowering treatments may be explained in part by their intermittent use.”

“Low medication adherence is related to increased mortality (20). The mean difference in HbA1c between patients with MPR <80% and ≥80% is between 0.37% and 0.55% (4 mmol/mol and 6 mmol/mol), equivalent to up to a 10% reduction in death or an 18% reduction in diabetes complications (21).”
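For illustration, the medication possession ratio used as the adherence measure here can be sketched roughly as follows (the fill records and the 365-day window are hypothetical, and the study's exact derivation may of course differ):

```python
def medication_possession_ratio(fills, observation_days=365):
    """MPR: total days of medication supplied divided by days in the
    observation window, capped at 1.0 (oversupply counts as full)."""
    supplied = sum(days_supplied for _fill_date, days_supplied in fills)
    return min(supplied / observation_days, 1.0)

# Hypothetical patient: ten 28-day prescriptions collected over a year,
# i.e. 280 days of supply out of 365.
fills = [(30 * i, 28) for i in range(10)]  # (day of fill, days supplied)
mpr = medication_possession_ratio(fills)
print(f"MPR = {mpr:.2f}, nonadherent (MPR < 80%): {mpr < 0.8}")
```

With these numbers the patient collects most prescriptions yet still falls just below the 80% cutoff, which is the kind of intermittent-but-persisting use the paper is describing.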

v. Health Care Transition in Young Adults With Type 1 Diabetes: Perspectives of Adult Endocrinologists in the U.S.

“Empiric data are limited on best practices in transition care, especially in the U.S. (10,13–16). Prior research, largely from the patient perspective, has highlighted challenges in the transition process, including gaps in care (13,17–19); suboptimal pediatric transition preparation (13,20); increased post-transition hospitalizations (21); and patient dissatisfaction with the transition experience (13,17–19). […] Young adults with type 1 diabetes transitioning from pediatric to adult care are at risk for adverse outcomes. Our objective was to describe experiences, resources, and barriers reported by a national sample of adult endocrinologists receiving and caring for young adults with type 1 diabetes.”

“We received responses from 536 of 4,214 endocrinologists (response rate 13%); 418 surveys met the eligibility criteria. Respondents (57% male, 79% Caucasian) represented 47 states; 64% had been practicing >10 years and 42% worked at an academic center. Only 36% of respondents reported often/always reviewing pediatric records and 11% reported receiving summaries for transitioning young adults with type 1 diabetes, although >70% felt that these activities were important for patient care.”

“A number of studies document deficiencies in provider hand-offs across other chronic conditions and point to the broader relevance of our findings. For example, in two studies of inflammatory bowel disease, adult gastroenterologists reported inadequacies in young adult transition preparation (31) and infrequent receipt of medical histories from pediatric providers (32). In a study of adult specialists caring for young adults with a variety of chronic diseases (33), more than half reported that they had no contact with the pediatric specialists.

Importantly, more than half of the endocrinologists in our study reported a need for increased access to mental health referrals for young adult patients with type 1 diabetes, particularly in nonacademic settings. Report of barriers to care was highest for patient scenarios involving mental health issues, and endocrinologists without easy access to mental health referrals were significantly more likely to report barriers to diabetes management for young adults with psychiatric comorbidities such as depression, substance abuse, and eating disorders.”

“Prior research (34,35) has uncovered the lack of mental health resources in diabetes care. In the large cross-national Diabetes Attitudes, Wishes and Needs (DAWN) study (36) […] diabetes providers often reported not having the resources to manage mental health problems; half of specialist diabetes physicians felt unable to provide psychiatric support for patients and one-third did not have ready access to outside expertise in emotional or psychiatric matters. Our results, which resonate with the DAWN findings, are particularly concerning in light of the vulnerability of young adults with type 1 diabetes for adverse medical and mental health outcomes (4,34,37,38). […] In a recent report from the Mental Health Issues of Diabetes conference (35), which focused on type 1 diabetes, a major observation included the lack of trained mental health professionals, both in academic centers and the community, who are knowledgeable about the mental health issues germane to diabetes.”

August 3, 2017 Posted by | Diabetes, Epidemiology, Medicine, Nephrology, Neurology, Pharmacology, Psychiatry, Psychology, Statistics, Studies | Leave a comment

Beyond Significance Testing (III)

There are many ways to misinterpret significance tests, and this book spends quite a bit of time and effort on these kinds of issues. I decided to include in this post quite a few quotes from chapter 4 of the book, which deals with these topics in some detail. I also included some notes on effect sizes.

“[P] < .05 means that the likelihood of the data or results even more extreme given random sampling under the null hypothesis is < .05, assuming that all distributional requirements of the test statistic are satisfied and there are no other sources of error variance. […] the odds-against-chance fallacy […] [is] the false belief that p indicates the probability that a result happened by sampling error; thus, p < .05 says that there is less than a 5% likelihood that a particular finding is due to chance. There is a related misconception I call the filter myth, which says that p values sort results into two categories, those that are a result of “chance” (H0 not rejected) and others that are due to “real” effects (H0 rejected). These beliefs are wrong […] When p is calculated, it is already assumed that H0 is true, so the probability that sampling error is the only explanation is already taken to be 1.00. It is thus illogical to view p as measuring the likelihood of sampling error. […] There is no such thing as a statistical technique that determines the probability that various causal factors, including sampling error, acted on a particular result.

Most psychology students and professors may endorse the local Type I error fallacy [which is] the mistaken belief that p < .05 given α = .05 means that the likelihood that the decision just taken to reject H0 is a type I error is less than 5%. […] p values from statistical tests are conditional probabilities of data, so they do not apply to any specific decision to reject H0. This is because any particular decision to do so is either right or wrong, so no probability is associated with it (other than 0 or 1.0). Only with sufficient replication could one determine whether a decision to reject H0 in a particular study was correct. […] the valid research hypothesis fallacy […] refers to the false belief that the probability that H1 is true is > .95, given p < .05. The complement of p is a probability, but 1 – p is just the probability of getting a result even less extreme under H0 than the one actually found. This fallacy is endorsed by most psychology students and professors”.
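The valid research hypothesis fallacy is easy to demonstrate with a simple base-rate calculation (my own sketch, not from the book; the prior and power values are arbitrary): by Bayes' rule, the probability that H1 is true given a significant result depends on how many tested hypotheses are true to begin with and on power, and it can sit far below .95:

```python
def prob_h1_given_significant(prior_h1, power, alpha=0.05):
    """P(H1 true | p < alpha): significant results arise either from
    true effects (at rate power) or from Type I errors (at rate alpha)."""
    true_pos = power * prior_h1
    false_pos = alpha * (1 - prior_h1)
    return true_pos / (true_pos + false_pos)

# If only 10% of tested hypotheses are true, then even with 80% power
# a significant result leaves P(H1) at only 0.64 - nowhere near .95.
print(prob_h1_given_significant(prior_h1=0.10, power=0.80))  # 0.64
```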

“[S]everal different false conclusions may be reached after deciding to reject or fail to reject H0. […] the magnitude fallacy is the false belief that low p values indicate large effects. […] p values are confounded measures of effect size and sample size […]. Thus, effects of trivial magnitude need only a large enough sample to be statistically significant. […] the zero fallacy […] is the mistaken belief that the failure to reject a nil hypothesis means that the population effect size is zero. Maybe it is, but you cannot tell based on a result in one sample, especially if power is low. […] The equivalence fallacy occurs when the failure to reject H0: µ1 = µ2 is interpreted as saying that the populations are equivalent. This is wrong because even if µ1 = µ2, distributions can differ in other ways, such as variability or distribution shape.”
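The sample-size confounding behind the magnitude fallacy is easy to show numerically. A sketch (my own numbers, not from the book) using the two-sample z test with equal group sizes, for which the test statistic equals d·√(n/2) where d is the standardized mean difference: a trivial d = .02 is nowhere near significance with 50 cases per group but "highly significant" with two million per group.

```python
import math

def two_sided_p(z):
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def z_from_effect(d, n_per_group):
    # Two-sample z test, equal n, unit variances: z = d * sqrt(n / 2)
    return d * math.sqrt(n_per_group / 2)

d = 0.02  # a trivial standardized mean difference
print(two_sided_p(z_from_effect(d, 50)))         # p is huge: not significant
print(two_sided_p(z_from_effect(d, 2_000_000)))  # p is tiny: "significant"
```

The effect is identical in both lines; only the sample size changed.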

“[T]he reification fallacy is the faulty belief that failure to replicate a result is the failure to make the same decision about H0 across studies […]. In this view, a result is not considered replicated if H0 is rejected in the first study but not in the second study. This sophism ignores sample size, effect size, and power across different studies. […] The sanctification fallacy refers to dichotomous thinking about continuous p values. […] Differences between results that are “significant” versus “not significant” by close margins, such as p = .03 versus p = .07 when α = .05, are themselves often not statistically significant. That is, relatively large changes in p can correspond to small, nonsignificant changes in the underlying variable (Gelman & Stern, 2006). […] Classical parametric statistical tests are not robust against outliers or violations of distributional assumptions, especially in small, unrepresentative samples. But many researchers believe just the opposite, which is the robustness fallacy. […] most researchers do not provide evidence about whether distributional or other assumptions are met”.
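The Gelman and Stern point about the sanctification fallacy can be checked directly: take two hypothetical independent studies whose z statistics give p ≈ .03 and p ≈ .07, then test the difference between them (a sketch with made-up z values, not an example from the book).

```python
import math

def two_sided_p(z):
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

z_a, z_b = 2.17, 1.81  # p is about .03 and .07: one "significant", one not
print(round(two_sided_p(z_a), 2), round(two_sided_p(z_b), 2))

# z test of the *difference* between the two independent results:
z_diff = (z_a - z_b) / math.sqrt(2)
print(round(two_sided_p(z_diff), 2))  # far from .05
```

One study falls on each side of the .05 line, yet the two results are statistically indistinguishable from each other.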

“Many [of the above] fallacies involve wishful thinking about things that researchers really want to know. These include the probability that H0 or H1 is true, the likelihood of replication, and the chance that a particular decision to reject H0 is wrong. Alas, statistical tests tell us only the conditional probability of the data. […] But there is [however] a method that can tell us what we want to know. It is not a statistical technique; rather, it is good, old-fashioned replication, which is also the best way to deal with the problem of sampling error. […] Statistical significance provides even in the best case nothing more than low-level support for the existence of an effect, relation, or difference. That best case occurs when researchers estimate a priori power, specify the correct construct definitions and operationalizations, work with random or at least representative samples, analyze highly reliable scores in distributions that respect test assumptions, control other major sources of imprecision besides sampling error, and test plausible null hypotheses. In this idyllic scenario, p values from statistical tests may be reasonably accurate and potentially meaningful, if they are not misinterpreted. […] The capability of significance tests to address the dichotomous question of whether effects, relations, or differences are greater than expected levels of sampling error may be useful in some new research areas. Due to the many limitations of statistical tests, this period of usefulness should be brief. Given evidence that an effect exists, the next steps should involve estimation of its magnitude and evaluation of its substantive significance, both of which are beyond what significance testing can tell us. […] It should be a hallmark of a maturing research area that significance testing is not the primary inference method.”

“[An] effect size [is] a quantitative reflection of the magnitude of some phenomenon used for the sake of addressing a specific research question. In this sense, an effect size is a statistic (in samples) or parameter (in populations) with a purpose, that of quantifying a phenomenon of interest. More specific definitions may depend on study design. […] cause size refers to the independent variable and specifically to the amount of change in it that produces a given effect on the dependent variable. A related idea is that of causal efficacy, or the ratio of effect size to the size of its cause. The greater the causal efficacy, the more that a given change on an independent variable results in proportionally bigger changes on the dependent variable. The idea of cause size is most relevant when the factor is experimental and its levels are quantitative. […] An effect size measure […] is a named expression that maps data, statistics, or parameters onto a quantity that represents the magnitude of the phenomenon of interest. This expression connects dimensions or generalized units that are abstractions of variables of interest with a specific operationalization of those units.”

“A good effect size measure has the [following properties:] […] 1. Its scale (metric) should be appropriate for the research question. […] 2. It should be independent of sample size. […] 3. As a point estimate, an effect size should have good statistical properties; that is, it should be unbiased, consistent […], and efficient […]. 4. The effect size [should be] reported with a confidence interval. […] Not all effect size measures […] have all the properties just listed. But it is possible to report multiple effect sizes that address the same question in order to improve the communication of the results.” 

“Examples of outcomes with meaningful metrics include salaries in dollars and post-treatment survival time in years. Means or contrasts for variables with meaningful units are unstandardized effect sizes that can be directly interpreted. […] In medical research, physical measurements with meaningful metrics are often available. […] But in psychological research there are typically no “natural” units for abstract, nonphysical constructs such as intelligence, scholastic achievement, or self-concept. […] Therefore, metrics in psychological research are often arbitrary instead of meaningful. An example is the total score for a set of true-false items. Because responses can be coded with any two different numbers, the total is arbitrary. Standard scores such as percentiles and normal deviates are arbitrary, too […] Standardized effect sizes can be computed for results expressed in arbitrary metrics. Such effect sizes can also be directly compared across studies where outcomes have different scales. This is because standardized effect sizes are based on units that have a common meaning regardless of the original metric.”

“1. It is better to report unstandardized effect sizes for outcomes with meaningful metrics. This is because the original scale is lost when results are standardized. 2. Unstandardized effect sizes are best for comparing results across different samples measured on the same outcomes. […] 3. Standardized effect sizes are better for comparing conceptually similar results based on different units of measure. […] 4. Standardized effect sizes are affected by the corresponding unstandardized effect sizes plus characteristics of the study, including its design […], whether factors are fixed or random, the extent of error variance, and sample base rates. This means that standardized effect sizes are less directly comparable over studies that differ in their designs or samples. […] 5. There is no such thing as T-shirt effect sizes (Lenth, 2006–2009) that classify standardized effect sizes as “small,” “medium,” or “large” and apply over all research areas. This is because what is considered a large effect in one area may be seen as small or trivial in another. […] 6. There is usually no way to directly translate standardized effect sizes into implications for substantive significance. […] It is standardized effect sizes from sets of related studies that are analyzed in most meta-analyses.”

July 16, 2017 Posted by | Books, Psychology, Statistics | Leave a comment

The Personality Puzzle (IV)

Below I have added a few quotes from the last 100 pages of the book. This will be my last post about the book.

“Carol Dweck and her colleagues claim that two […] kinds of goals are […] important […]. One kind she calls judgment goals. Judgment, in this context, refers to seeking to judge or validate an attribute in oneself. For example, you might have the goal of convincing yourself that you are smart, beautiful, or popular. The other kind she calls development goals. A development goal is the desire to actually improve oneself, to become smarter, more beautiful, or more popular. […] From the perspective of Dweck’s theory, these two kinds of goals are important in many areas of life because they produce different reactions to failure, and everybody fails sometimes. A person with a development goal will respond to failure with what Dweck calls a mastery-oriented pattern, in which she tries even harder the next time. […] In contrast, a person with a judgment goal responds to failure with what Dweck calls the helpless pattern: Rather than try harder, this individual simply concludes, “I can’t do it,” and gives up. Of course, that only guarantees more failure in the future. […] Dweck believes [the goals] originate in different kinds of implicit theories about the nature of the world […] Some people hold what Dweck calls entity theories, and believe that personal qualities such as intelligence and ability are unchangeable, leading them to respond helplessly to any indication that they do not have what it takes. Other people hold incremental theories, believing that intelligence and ability can change with time and experience. Their goals, therefore, involve not only proving their competence but increasing it.”

(I should probably add here that any sort of empirical validation of those theories and their consequences is, aside from a brief discussion of the results of a few (likely weak, low-powered) studies, completely absent in the book, but this kind of stuff might even so be worth having in mind, which was why I included this quote in my coverage – US).

“A large amount of research suggests that low self-esteem […] is correlated with outcomes such as dissatisfaction with life, hopelessness, and depression […] as well as loneliness […] Declines in self-esteem also appear to cause outcomes including depression, lower satisfaction with relationships, and lower satisfaction with one’s career […] Your self-esteem tends to suffer when you have failed in the eyes of your social group […] This drop in self-esteem may be a warning about possible rejection or even social ostracism — which, for our distant ancestors, could literally be fatal — and motivate you to restore your reputation. High self-esteem, by contrast, may indicate success and acceptance. Attempts to bolster self-esteem can backfire. […] People who self-enhance — who think they are better than the other people who know them think they are — can run into problems in relations with others, mental health, and adjustment […] Narcissism is associated with high self-esteem that is brittle and unstable because it is unrealistic […], and unstable self-esteem may be worse than low self-esteem […] The bottom line is that promoting psychological health requires something more complex than simply trying to make everybody feel better about themselves […]. The best way to raise self-esteem is through accomplishments that increase it legitimately […]. The most important aspect of your opinion of yourself is not whether it is good or bad, but the degree to which it is accurate.”

“An old theory suggested that if you repeated something over and over in your mind, such rehearsal was sufficient to move the information into long-term memory (LTM), or permanent memory storage. Later research showed that this idea is not quite correct. The best way to get information into LTM, it turns out, is not just to repeat it, but to really think about it (a process called elaboration). The longer and more complex the processing that a piece of information receives, the more likely it is to get transferred into LTM”.

“Concerning mental health, aspects of personality can become so extreme as to cause serious problems. When this happens, psychologists begin to speak of personality disorders […] Personality disorders have five general characteristics. They are (1) unusual and, (2) by definition, tend to cause problems. In addition, most but not quite all personality disorders (3) affect social relations and (4) are stable over time. Finally, (5) in some cases, the person who has a personality disorder may see it not as a disorder at all, but a basic part of who he or she is. […] personality disorders can be ego-syntonic, which means the people who have them do not think anything is wrong. People who suffer from other kinds of mental disorder generally experience their symptoms of confusion, depression, or anxiety as ego-dystonic afflictions of which they would like to be cured. For a surprising number of people with personality disorders, in contrast, their symptoms feel like normal and even valued aspects of who they are. Individuals with the attributes of the antisocial or narcissistic personality disorders, in particular, typically do not think they have a problem.”

[One side-note: It’s important to be aware of the fact that not all people who display unusual behavioral patterns which are causing them problems necessarily suffer from a personality disorder. Other categorization schemes also exist. Autism is for example not categorized as a personality disorder, but is rather considered to be a (neuro)developmental disorder. Funder does not go into this kind of stuff in his book but I thought it might be worth mentioning here – US]

“Some people are more honest than others, but when deceit and manipulation become core aspects of an individual’s way of dealing with the world, he may be diagnosed with antisocial personality disorder. […] People with this disorder are impulsive, and engage in risky behaviors […] They typically are irritable, aggressive, and irresponsible. The damage they do to others bothers them not one whit; they rationalize […] that life is unfair; the world is full of suckers; and if you don’t take what you want whenever you can, then you are a sucker too. […] A wide variety of negative outcomes may accompany this disorder […] Antisocial personality disorder is sometimes confused with the trait of psychopathy […] but it’s importantly different […] Psychopaths are emotionally cold, they disregard social norms, and they are manipulative and often cunning. Most psychopaths meet the criteria for antisocial personality disorder, but the reverse is not true.”

“From day to day with different people, and over time with the same people, most individuals feel and act pretty consistently. […] Predictability makes it possible to deal with others in a reasonable way, and gives each of us a sense of individual identity. But some people are less consistent than others […] borderline personality disorder […] is characterized by unstable and confused behavior, a poor sense of identity, and patterns of self-harm […] Their chaotic thoughts, emotions, and behaviors make persons suffering from this disorder very difficult for others to “read” […] Borderline personality disorder (BPD) entails so many problems for the affected person that nobody doubts that it is, at the very least, on the “borderline” with severe psychopathology.5 Its hallmark is emotional instability. […] All of the personality disorders are rather mixed bags of indicators, and BPD may be the most mixed of all. It is difficult to find a coherent, common thread among its characteristics […] Some psychologists […] have suggested that this [personality disorder] category is too diffuse and should be abandoned.”

“[T]he modern research literature on personality disorders has come close to consensus about one conclusion: There is no sharp dividing line between psychopathology and normal variation (L. A. Clark & Watson, 1999a; Furr & Funder, 1998; Hong & Paunonen, 2011; Krueger & Eaton, 2010; Krueger & Tackett, 2003; B. P. O’Connor, 2002; Trull & Durrett, 2005).”

“Accurate self-knowledge has long been considered a hallmark of mental health […] The process for gaining accurate self-knowledge is outlined by the Realistic Accuracy Model […] according to RAM, one can gain accurate knowledge of anyone’s personality through a four-stage process. First, the person must do something relevant to the trait being judged; second, the information must be available to the judge; third, the judge must detect this information; and fourth, the judge must utilize the information correctly. This model was initially developed to explain the accuracy of judgments of other people. In an important sense, though, you are just one of the people you happen to know, and, to some degree, you come to know yourself the same way you find out about anybody else — by observing what you do and trying to draw appropriate conclusions”.

“[P]ersonality is not just something you have; it is also something you do. The unique aspects of what you do comprise the procedural self, and your knowledge of this self typically takes the form of procedural knowledge. […] The procedural self is made up of the behaviors through which you express who you think you are, generally without knowing you are doing so […]. Like riding a bicycle, the working of the procedural self is automatic and not very accessible to conscious awareness.”

July 14, 2017 Posted by | Books, Psychology | Leave a comment

Beyond Significance Testing (II)

I have added some more quotes and observations from the book below.

“The least squares estimators M and s2 are not robust against the effects of extreme scores. […] Conventional methods to construct confidence intervals rely on sample standard deviations to estimate standard errors. These methods also rely on critical values in central test distributions, such as t and z, that assume normality or homoscedasticity […] Such distributional assumptions are not always plausible. […] One option to deal with outliers is to apply transformations, which convert original scores with a mathematical operation to new ones that may be more normally distributed. The effect of applying a monotonic transformation is to compress one part of the distribution more than another, thereby changing its shape but not the rank order of the scores. […] It can be difficult to find a transformation that works in a particular data set. Some distributions can be so severely nonnormal that basically no transformation will work. […] An alternative that also deals with departures from distributional assumptions is robust estimation. Robust (resistant) estimators are generally less affected than least squares estimators by outliers or nonnormality.”
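A monotonic transformation in action (my own toy scores, not an example from the book): a log transform leaves the rank order untouched but compresses the upper tail far more than the lower one, which is exactly the "changing shape but not rank order" property described above.

```python
import math

scores = [1, 2, 3, 5, 8, 13, 200]  # one extreme score in the upper tail
logged = [math.log(x) for x in scores]

def rank_order(xs):
    """Indices of the scores sorted from lowest to highest."""
    return sorted(range(len(xs)), key=lambda i: xs[i])

# Rank order is unchanged because the transformation is monotonic
print(rank_order(scores) == rank_order(logged))  # True

# But the gap between the two highest scores shrinks from 187 to about 2.7
print(scores[-1] - scores[-2], round(logged[-1] - logged[-2], 1))
```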

“An estimator’s quantitative robustness can be described by its finite-sample breakdown point (BP), or the smallest proportion of scores that when made arbitrarily very large or small renders the statistic meaningless. The lower the value of BP, the less robust the estimator. For both M and s2, BP = 0, the lowest possible value. This is because the value of either statistic can be distorted by a single outlier, and the ratio 1/N approaches zero as sample size increases. In contrast, BP = .50 for the median because its value is not distorted by arbitrarily extreme scores unless they make up at least half the sample. But the median is not an optimal estimator because its value is determined by a single score, the one at the 50th percentile. In this sense, all the other scores are discarded by the median. A compromise between the sample mean and the median is the trimmed mean. A trimmed mean Mtr is calculated by (a) ordering the scores from lowest to highest, (b) deleting the same proportion of the most extreme scores from each tail of the distribution, and then (c) calculating the average of the scores that remain. […] A common practice is to trim 20% of the scores from each tail of the distribution when calculating trimmed estimators. This proportion tends to maintain the robustness of trimmed means while minimizing their standard errors when sampling from symmetrical distributions […] For 20% trimmed means, BP = .20, which says they are robust against arbitrarily extreme scores unless such outliers make up at least 20% of the sample.”
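Kline's comparison of the three estimators can be sketched in a few lines (my own made-up scores; 20% trimming as in the text). A single arbitrarily extreme score, here 10% of the sample, ruins the mean but leaves the median and the 20% trimmed mean untouched, matching the breakdown points BP = 0, .50, and .20 respectively.

```python
from statistics import mean, median

def trimmed_mean(scores, prop=0.20):
    """Drop prop of the scores from each tail, then average the remainder."""
    s = sorted(scores)
    k = int(len(s) * prop)
    return mean(s[k:len(s) - k])

clean = [12, 14, 14, 15, 15, 16, 16, 17, 18, 19]
contaminated = clean[:-1] + [900]  # one outlier = 10% of N

print(mean(contaminated))          # badly distorted, since BP = 0 for M
print(median(contaminated))        # unaffected
print(trimmed_mean(contaminated))  # unaffected, since 10% < 20% trimming
```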

“The standard H0 is both a point hypothesis and a nil hypothesis. A point hypothesis specifies the numerical value of a parameter or the difference between two or more parameters, and a nil hypothesis states that this value is zero. The latter is usually a prediction that an effect, difference, or association is zero. […] Nil hypotheses as default explanations may be fine in new research areas when it is unknown whether effects exist at all. But they are less suitable in established areas when it is known that some effect is probably not zero. […] Nil hypotheses are tested much more often than non-nil hypotheses even when the former are implausible. […] If a nil hypothesis is implausible, estimated probabilities of data will be too low. This means that risk for Type I error is basically zero and a Type II error is the only possible kind when H0 is known in advance to be false.”

“Too many researchers treat the conventional levels of α, either .05 or .01, as golden rules. If other levels of α are specified, they tend to be even lower […]. Sanctification of .05 as the highest “acceptable” level is problematic. […] Instead of blindly accepting either .05 or .01, one does better to […] [s]pecify a level of α that reflects the desired relative seriousness (DRS) of Type I error versus Type II error. […] researchers should not rely on a mechanical ritual (i.e., automatically specify .05 or .01) to control risk for Type I error that ignores the consequences of Type II error.”

“Although p and α are derived in the same theoretical sampling distribution, p does not estimate the conditional probability of a Type I error […]. This is because p is based on a range of results under H0, but α has nothing to do with actual results and is supposed to be specified before any data are collected. Confusion between p and α is widespread […] To differentiate the two, Gigerenzer (1993) referred to p as the exact level of significance. If p = .032 and α = .05, H0 is rejected at the .05 level, but .032 is not the long-run probability of Type I error, which is .05 for this example. The exact level of significance is the conditional probability of the data (or any result even more extreme) assuming H0 is true, given all other assumptions about sampling, distributions, and scores. […] Because p values are estimated assuming that H0 is true, they do not somehow measure the likelihood that H0 is correct. […] The false belief that p is the probability that H0 is true, or the inverse probability error […] is widespread.”

“Probabilities from significance tests say little about effect size. This is because essentially any test statistic (TS) can be expressed as the product TS = ES × f(N) […] where ES is an effect size and f(N) is a function of sample size. This equation explains how it is possible that (a) trivial effects can be statistically significant in large samples or (b) large effects may not be statistically significant in small samples. So p is a confounded measure of effect size and sample size.”
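The factorization TS = ES × f(N) can be verified directly for the two-sample t statistic with equal group sizes, where the ES term is the sample standardized mean difference d and f(N) = √(n/2). A sketch on made-up data (my own numbers):

```python
import math
from statistics import mean, stdev

group1 = [24.0, 27.5, 22.1, 30.2, 26.4, 25.0, 28.8, 23.3]
group2 = [21.9, 25.1, 20.4, 27.0, 23.5, 22.2, 26.1, 21.0]
n = len(group1)  # equal group sizes

# Pooled standard deviation and standardized mean difference d (the ES term)
sp = math.sqrt((stdev(group1) ** 2 + stdev(group2) ** 2) / 2)
d = (mean(group1) - mean(group2)) / sp

# The usual two-sample t statistic
t = (mean(group1) - mean(group2)) / (sp * math.sqrt(2 / n))

# TS = ES * f(N), with f(N) = sqrt(n / 2)
print(math.isclose(t, d * math.sqrt(n / 2)))  # True
```

Holding d fixed, t grows with √n, which is why trivial effects become "significant" in large enough samples.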

“Power is the probability of getting statistical significance over many random replications when H1 is true. It varies directly with sample size and the magnitude of the population effect size. […] This combination leads to the greatest power: a large population effect size, a large sample, a higher level of α […], a within-subjects design, a parametric test rather than a nonparametric test (e.g., t instead of Mann–Whitney), and very reliable scores. […] Power .80 is generally desirable, but an even higher standard may be needed if consequences of Type II error are severe. […] Reviews from the 1970s and 1980s indicated that the typical power of behavioral science research is only about .50 […] and there is little evidence that power is any higher in more recent studies […] Ellis (2010) estimated that < 10% of studies have samples sufficiently large to detect smaller population effect sizes. Increasing sample size would address low power, but the number of additional cases necessary to reach even nominal power when studying smaller effects may be so great as to be practically impossible […] Too few researchers, generally < 20% (Osborne, 2008), bother to report prospective power despite admonitions to do so […] The concept of power does not stand without significance testing. As statistical tests play a smaller role in the analysis, the relevance of power also declines. If significance tests are not used, power is irrelevant. Cumming (2012) described an alternative called precision for research planning, where the researcher specifies a target margin of error for estimating the parameter of interest. […] The advantage over power analysis is that researchers must consider both effect size and precision in study planning.”
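How power depends jointly on effect size and n can be sketched with the normal approximation for a two-sided two-sample test, where power ≈ Φ(d·√(n/2) − z_α) plus a usually negligible term for the opposite tail (my own sketch and numbers, not from the book):

```python
import math

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def power_two_sample_z(d, n_per_group, z_alpha=1.96):
    """Approximate power of a two-sided z test for standardized effect d."""
    shift = d * math.sqrt(n_per_group / 2)
    # Probability of landing in either rejection region when H1 is true
    return phi(shift - z_alpha) + phi(-shift - z_alpha)

# A medium effect (d = .5) with a typical small sample: power near the
# ~.50 figure the reviews cited above have reported
print(round(power_two_sample_z(0.5, 32), 2))

# Roughly 63 cases per group are needed to reach power .80 at d = .5
print(round(power_two_sample_z(0.5, 63), 2))
```

Halving the effect size roughly quadruples the n needed for the same power, which is why small effects demand impractically large samples.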

“Classical nonparametric tests are alternatives to the parametric t and F tests for means (e.g., the Mann–Whitney test is the nonparametric analogue to the t test). Nonparametric tests generally work by converting the original scores to ranks. They also make fewer assumptions about the distributions of those ranks than do parametric tests applied to the original scores. Nonparametric tests date to the 1950s–1960s, and they share some limitations. One is that they are not generally robust against heteroscedasticity, and another is that their application is typically limited to single-factor designs […] Modern robust tests are an alternative. They are generally more flexible than nonparametric tests and can be applied in designs with multiple factors. […] At the end of the day, robust statistical tests are subject to many of the same limitations as other statistical tests. For example, they assume random sampling albeit from population distributions that may be nonnormal or heteroscedastic; they also assume that sampling error is the only source of error variance. Alternative tests, such as the Welch–James and Yuen–Welch versions of a robust t test, do not always yield the same p value for the same data, and it is not always clear which alternative is best (Wilcox, 2003).”
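How the rank conversion in the Mann–Whitney test works can be sketched in a few lines (my own toy data, assuming no tied scores): the U statistic computed from rank sums equals the number of cross-group pairs in which the first group's score is higher.

```python
def mann_whitney_u(x, y):
    """U for group x, computed from rank sums; assumes no tied scores."""
    combined = sorted(x + y)
    ranks = {v: i + 1 for i, v in enumerate(combined)}
    r1 = sum(ranks[v] for v in x)
    return r1 - len(x) * (len(x) + 1) / 2

x = [1.1, 2.4, 3.9, 5.6]
y = [0.8, 1.7, 2.0, 3.1]

# U equals the count of (x, y) pairs in which the x score is higher
pairs = sum(xi > yi for xi in x for yi in y)
print(mann_whitney_u(x, y), pairs)
```

Because only the ranks enter the statistic, the original distances between scores, and hence some distributional assumptions, drop out.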

July 11, 2017 Posted by | Books, Psychology, Statistics | Leave a comment

The Personality Puzzle (III)

I have added some more quotes and observations from the book below.

“Across many, many traits, the average correlation across MZ twins is about .60, and across DZ twins it is about .40, when adjusted for age and gender […] This means that, according to twin studies, the average heritability of many traits is about .40, which is interpreted to mean that 40 percent of phenotypic (behavioral) variance is accounted for by genetic variance. The heritabilities of the Big Five traits are a bit higher; according to one comprehensive summary they range from .42, for agreeableness, to .57, for openness (Bouchard, 2004). […] behavioral genetic analyses and the statistics they produce refer to groups or populations, not individuals. […] when research concludes that a personality trait is, say, 50 percent heritable, this does not mean that half of the extent to which an individual expresses that trait is determined genetically. Instead, it means that 50 percent of the degree to which the trait varies across the population can be attributed to genetic variation. […] Because heritability is the proportion of variation due to genetic influences, if there is no variation, then the heritability must approach zero. […] Heritability statistics are not the nature-nurture ratio; a biologically determined trait can have a zero heritability.”
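The step from those twin correlations to the ~.40 heritability figure is, presumably, Falconer's formula, which the quote does not spell out: MZ twins share roughly all segregating genes and DZ twins about half, so h² ≈ 2(r_MZ − r_DZ). A minimal sketch with the averages quoted above:

```python
def falconer_h2(r_mz, r_dz):
    """Falconer's estimate of heritability: h^2 = 2 * (r_MZ - r_DZ)."""
    return 2 * (r_mz - r_dz)

# The average twin correlations quoted above
print(round(falconer_h2(0.60, 0.40), 2))  # 0.4, i.e., the ~40% figure
```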

“The environment can […] affect heritability […]. For example, when every child receives adequate nutrition, variance in height is genetically controlled. […] But in an environment where some are well fed while others go hungry, variance in height will fall more under the control of the environment. Well-fed children will grow near the maximum of their genetic potential while poorly fed children will grow closer to their genetic minimum, and the height of the parents will not matter so much; the heritability coefficient for height will be much closer to 0. […] A trait that is adaptive in one situation may be harmful in another […] the same environments that promote good outcomes for some people can promote bad outcomes for others, and vice versa […] More generally, the same circumstances might be experienced as stressful, enjoyable, or boring, depending on the genetic predispositions of the individuals involved; these variations in experience can lead to very different behaviors and, over time, to the development of different personality traits.”

“Mihaly Csikszentmihalyi [argued] that the best way a person can spend time is in autotelic activities, those that are enjoyable for their own sake. The subjective experience of an autotelic activity — the enjoyment itself — is what Csikszentmihalyi calls flow.
Flow is not the same thing as joy, happiness, or other, more familiar terms for subjective well-being. Rather, the experience of flow is characterized by tremendous concentration, total lack of distractibility, and thoughts concerning only the activity at hand. […] Losing track of time is one sign of experiencing flow. According to Csikszentmihalyi, flow arises when the challenges an activity presents are well matched with your skills. If an activity is too difficult or too confusing, you will experience anxiety, worry, and frustration. If the activity is too easy, you will experience boredom and (again) anxiety. But when skills and challenges are balanced, you experience flow. […] Csikszentmihalyi thinks that the secret for enhancing your quality of life is to spend as much time in flow as possible. Achieving flow entails becoming good at something you find worthwhile and enjoyable. […] Even in the best of circumstances [however], flow seems to describe a rather solitary kind of happiness. […] The drawback with flow is that somebody experiencing it can be difficult to interact with”. [I really did not like most of the stuff included in the part of the book from which this quote is taken, but I did find Csikszentmihalyi’s flow concept quite interesting.]

“About 80 percent of the participants in psychological research come from countries that are Western, Educated, Industrialized, Rich, and Democratic — ”WEIRD” in other words — although only 12 percent of the world’s population live there (Henrich et al., 2010).”

“If an animal or a person performs a behavior, and the behavior is followed by a good result — a reinforcement — the behavior becomes more likely. If the behavior is followed by a punishment, it becomes less likely. […] the results of operant conditioning are not necessarily logical. It can increase the frequency of any behavior, regardless of its real connection with the consequences that follow.”

“A punishment is an aversive consequence that follows an act in order to stop it and prevent its repetition. […] Many people believe the only way to stop or prevent somebody from doing something is punishment. […] You can [however] use reward for this purpose too. All you have to do is find a response that is incompatible with the one you are trying to get rid of, and reward that incompatible response instead. Reward a child for reading instead of punishing him for watching television. […] punishment works well when it is done right. The only problem is, it is almost never done right. […] One way to see how punishment works, or fails to work, is to examine the rules for applying it correctly. The classic behaviorist analysis says that five principles are most important […] 1. Availability of Alternatives: An alternative response to the behavior that is being punished must be available. This alternative response must not be punished and should be rewarded. […] 2. Behavioral and Situational Specificity: Be clear about exactly what behavior you are punishing and the circumstances under which it will and will not be punished. […] 3. Timing and Consistency: To be effective, a punishment needs to be applied immediately after the behavior you wish to prevent, every time that behavior occurs. Otherwise, the person (or animal) being punished may not understand which behavior is forbidden. […] 4. Conditioning Secondary Punishing Stimuli: One can lessen the actual use of punishment by conditioning secondary stimuli to it [such as e.g. verbal warnings] […] 5. Avoiding Mixed Messages: […] Sometimes, after punishing a child, the parent feels so guilty that she picks the child up for a cuddle. This is a mistake. The child might start to misbehave just to get the cuddle that follows the punishment. Punish if you must punish, but do not mix your message. A variant on this problem occurs when the child learns to play one parent against the other.
For example, after the father punishes the child, the child goes to the mother for sympathy, or vice versa. This can produce the same counterproductive result.”

“Punishment will backfire unless all of the guidelines [above] are followed. Usually, they are not. A punisher has to be extremely careful, for several reasons. […] The first and perhaps most important danger of punishment is that it creates emotion. […] powerful emotions are not conducive to clear thinking. […] Punishment [also] tends to vary with the punisher’s mood, which is one reason it is rarely applied consistently. […] Punishment [furthermore] [m]otivates [c]oncealment: The prospective punishee has good reasons to conceal behavior that might be punished. […] Rewards have the reverse effect. When workers anticipate rewards for good work instead of punishment for bad work, they are naturally motivated to bring to the boss’s attention everything they are doing, in case it merits reward.”

Gordon Allport observed years ago [that] [“]For some the world is a hostile place where men are evil and dangerous; for others it is a stage for fun and frolic. It may appear as a place to do one’s duty grimly; or a pasture for cultivating friendship and love.[“] […] people with different traits see the world differently. This perception affects how they react to the events in their lives which, in turn, affects what they do. […] People [also] differ in the emotions they experience, the emotions they want to experience, how strongly they experience emotions, how frequently their emotions change, and how well they understand and control their emotions.”

July 9, 2017 Posted by | Books, Genetics, Psychology

Beyond Significance Testing (I)

“This book introduces readers to the principles and practice of statistics reform in the behavioral sciences. It (a) reviews the now even larger literature about shortcomings of significance testing; (b) explains why these criticisms have sufficient merit to justify major changes in the ways researchers analyze their data and report the results; (c) helps readers acquire new skills concerning interval estimation and effect size estimation; and (d) reviews alternative ways to test hypotheses, including Bayesian estimation. […] I assume that the reader has had undergraduate courses in statistics that covered at least the basics of regression and factorial analysis of variance. […] This book is suitable as a textbook for an introductory course in behavioral science statistics at the graduate level.”

I’m currently reading this book. I have so far read 8 out of the 10 chapters included, and I’m currently sort of hovering between a 3 and 4 star goodreads rating; some parts of the book are really great, but there are also a few aspects I don’t like. Some parts of the coverage are rather technical, and I’m still debating to what extent I should cover the technical stuff in detail later here on the blog; there are quite a few equations included in the book, and I find it annoying to cover math using the wordpress format of this blog. For now I’ll start out with a reasonably non-technical post with some quotes and key ideas from the first parts of the book.

“In studies of intervention outcomes, a statistically significant difference between treated and untreated cases […] has nothing to do with whether treatment leads to any tangible benefits in the real world. In the context of diagnostic criteria, clinical significance concerns whether treated cases can no longer be distinguished from control cases not meeting the same criteria. For example, does treatment typically prompt a return to normal levels of functioning? A treatment effect can be statistically significant yet trivial in terms of its clinical significance, and clinically meaningful results are not always statistically significant. Accordingly, the proper response to claims of statistical significance in any context should be “so what?” — or, more pointedly, “who cares?” — without more information.”
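The statistical-vs-clinical significance point is easy to make concrete with a small simulation (my own illustrative numbers, not the book’s): with a large enough sample, even a trivially small group difference comes out “statistically significant”.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical trial: a tiny "treatment effect" of 0.05 SD, but a huge sample.
n = 50_000
treated = rng.normal(loc=0.05, scale=1.0, size=n)
control = rng.normal(loc=0.0, scale=1.0, size=n)

t_stat, p = stats.ttest_ind(treated, control)

# Cohen's d: standardized mean difference (pooled SD), i.e. the effect size.
pooled_sd = np.sqrt((treated.var(ddof=1) + control.var(ddof=1)) / 2)
d = (treated.mean() - control.mean()) / pooled_sd

print(f"p = {p:.2e}, d = {d:.3f}")  # p clears .05 easily; d stays trivial
```

The p value here is far below .05 even though the standardized effect is far too small to matter clinically, which is exactly why “so what?” is the proper follow-up question.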

“There are free computer tools for estimating power, but most researchers — probably at least 80% (e.g., Ellis, 2010) — ignore the power of their analyses. […] Ignoring power is regrettable because the median power of published nonexperimental studies is only about .50 (e.g., Maxwell, 2004). This implies a 50% chance of correctly rejecting the null hypothesis based on the data. In this case the researcher may as well not collect any data but instead just toss a coin to decide whether or not to reject the null hypothesis. […] A consequence of low power is that the research literature is often difficult to interpret. Specifically, if there is a real effect but power is only .50, about half the studies will yield statistically significant results and the rest will yield no statistically significant findings. If all these studies were somehow published, the number of positive and negative results would be roughly equal. In an old-fashioned, narrative review, the research literature would appear to be ambiguous, given this balance. It may be concluded that “more research is needed,” but any new results will just reinforce the original ambiguity, if power remains low.”
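The 50% point is easy to verify by simulation; here is a sketch (the effect size and sample size are my own choices, picked so that the two-sample t-test has power of roughly .50 at alpha = .05):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# With a true effect of d = 0.5 and n = 31 per group, power is close to .50.
n_per_group, d, alpha, n_studies = 31, 0.5, 0.05, 2000

rejections = 0
for _ in range(n_studies):
    treated = rng.normal(d, 1.0, n_per_group)    # real effect exists
    control = rng.normal(0.0, 1.0, n_per_group)
    _, p = stats.ttest_ind(treated, control)
    if p < alpha:
        rejections += 1

rate = rejections / n_studies
print(f"Rejection rate across {n_studies} simulated studies: {rate:.2f}")
```

About half of the simulated studies reject the null and half do not, despite every single one of them studying the same real effect; this is the “ambiguous literature” the quote describes.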

“Statistical tests of a treatment effect that is actually clinically significant may fail to reject the null hypothesis of no difference when power is low. If the researcher in this case ignored whether the observed effect size is clinically significant, a potentially beneficial treatment may be overlooked. This is exactly what was found by Freiman, Chalmers, Smith, and Kuebler (1978), who reviewed 71 randomized clinical trials of mainly heart- and cancer-related treatments with “negative” results (i.e., not statistically significant). They found that if the authors of 50 of the 71 trials had considered the power of their tests along with the observed effect sizes, those authors should have concluded just the opposite, or that the treatments resulted in clinically meaningful improvements.”

“Even if researchers avoided the kinds of mistakes just described, there are grounds to suspect that p values from statistical tests are simply incorrect in most studies: 1. They (p values) are estimated in theoretical sampling distributions that assume random sampling from known populations. Very few samples in behavioral research are random samples. Instead, most are convenience samples collected under conditions that have little resemblance to true random sampling. […] 2. Results of more quantitative reviews suggest that, due to assumptions violations, there are few actual data sets in which significance testing gives accurate results […] 3. Probabilities from statistical tests (p values) generally assume that all other sources of error besides sampling error are nil. This includes measurement error […] Other sources of error arise from failure to control for extraneous sources of variance or from flawed operational definitions of hypothetical constructs. It is absurd to assume in most studies that there is no error variance besides sampling error. Instead it is more practical to expect that sampling error makes up the small part of all possible kinds of error when the number of cases is reasonably large (Ziliak & McCloskey, 2008).”

“The p values from statistical tests do not tell researchers what they want to know, which often concerns whether the data support a particular hypothesis. This is because p values merely estimate the conditional probability of the data under a statistical hypothesis — the null hypothesis — that in most studies is an implausible, straw man argument. In fact, p values do not directly “test” any hypothesis at all, but they are often misinterpreted as though they describe hypotheses instead of data. Although p values ultimately provide a yes-or-no answer (i.e., reject or fail to reject the null hypothesis), the question — p < α?, where α is the criterion level of statistical significance, usually .05 or .01 — is typically uninteresting. The yes-or-no answer to this question says nothing about scientific relevance, clinical significance, or effect size. […] determining clinical significance is not just a matter of statistics; it also requires strong knowledge about the subject matter.”

“[M]any null hypotheses have little if any scientific value. For example, Anderson et al. (2000) reviewed null hypotheses tested in several hundred empirical studies published from 1978 to 1998 in two environmental sciences journals. They found many implausible null hypotheses that specified things such as equal survival probabilities for juvenile and adult members of a species or that growth rates did not differ across species, among other assumptions known to be false before collecting data. I am unaware of a similar survey of null hypotheses in the behavioral sciences, but I would be surprised if the results would be very different.”

“Hoekstra, Finch, Kiers, and Johnson (2006) examined a total of 266 articles published in Psychonomic Bulletin & Review during 2002–2004. Results of significance tests were reported in about 97% of the articles, but confidence intervals were reported in only about 6%. Sadly, p values were misinterpreted in about 60% of surveyed articles. Fidler, Burgman, Cumming, Buttrose, and Thomason (2006) sampled 200 articles published in two different biology journals. Results of significance testing were reported in 92% of articles published during 2001–2002, but this rate dropped to 78% in 2005. There were also corresponding increases in the reporting of confidence intervals, but power was estimated in only 8% and p values were misinterpreted in 63%. […] Sun, Pan, and Wang (2010) reviewed a total of 1,243 works published in 14 different psychology and education journals during 2005–2007. The percentage of articles reporting effect sizes was 49%, and 57% of these authors interpreted their effect sizes.”

“It is a myth that the larger the sample, the more closely it approximates a normal distribution. This idea probably stems from a misunderstanding of the central limit theorem, which applies to certain group statistics such as means. […] This theorem justifies approximating distributions of random means with normal curves, but it does not apply to distributions of scores in individual samples. […] larger samples do not generally have more normal distributions than smaller samples. If the population distribution is, say, positively skewed, this shape will tend to show up in the distributions of random samples that are either smaller or larger.”
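A quick simulation (mine, not the book’s) illustrates the distinction the quote draws: the distribution of sample *means* normalizes as samples accumulate, while the distribution of *scores* within a sample keeps the population’s skew however large the sample gets.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# A positively skewed population: exponential (population skewness = 2).
big_sample = rng.exponential(scale=1.0, size=100_000)

# Means of 5,000 samples of n = 100 drawn from the same population.
means = rng.exponential(scale=1.0, size=(5_000, 100)).mean(axis=1)

print("skewness of one large sample:", stats.skew(big_sample))  # stays near 2
print("skewness of sample means:    ", stats.skew(means))       # shrinks toward 0
```

Even with 100,000 cases, the sample of raw scores is about as skewed as the population; only the sampling distribution of the mean looks approximately normal, which is what the central limit theorem actually promises.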

“A standard error is the standard deviation in a sampling distribution, the probability distribution of a statistic across all random samples drawn from the same population(s) and with each sample based on the same number of cases. It estimates the amount of sampling error in standard deviation units. The square of a standard error is the error variance. […] Variability of the sampling distributions […] decreases as the sample size increases. […] The standard error sM, which estimates variability of the group statistic M, is often confused with the standard deviation s, which measures variability at the case level. This confusion is a source of misinterpretation of both statistical tests and confidence intervals […] Note that the standard error sM itself has a standard error (as do standard errors for all other kinds of statistics). This is because the value of sM varies over random samples. This explains why one should not overinterpret a confidence interval or p value from a significance test based on a single sample.”
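The distinction between s and sM, and the point that sM itself varies across random samples, can be sketched like this (my own illustrative numbers):

```python
import numpy as np

rng = np.random.default_rng(3)

scores = rng.normal(loc=100, scale=15, size=250)  # one sample of 250 cases

s = scores.std(ddof=1)            # case-level variability: standard deviation
s_m = s / np.sqrt(len(scores))    # variability of the mean M: standard error

print(f"s = {s:.2f}, s_M = {s_m:.2f}")

# s_M itself varies over random samples, i.e. it has its own standard error:
sms = np.array([rng.normal(100, 15, 250).std(ddof=1) / np.sqrt(250)
                for _ in range(1_000)])
print(f"spread of s_M across 1,000 samples: {sms.std(ddof=1):.3f}")
```

The standard error is sqrt(250) ≈ 15.8 times smaller than the standard deviation here, and the second print shows that a confidence interval computed from a single sample’s sM is itself subject to sampling error.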

“Standard errors estimate sampling error under random sampling. What they measure when sampling is not random may not be clear. […] Standard errors also ignore […] other sources of error [:] 1. Measurement error [which] refers to the difference between an observed score X and the true score on the underlying construct. […] Measurement error reduces absolute effect sizes and the power of statistical tests. […] 2. Construct definition error [which] involves problems with how hypothetical constructs are defined or operationalized. […] 3. Specification error [which] refers to the omission from a regression equation of at least one predictor that covaries with the measured (included) predictors. […] 4. Treatment implementation error occurs when an intervention does not follow prescribed procedures. […] Gosset used the term real error to refer all types of error besides sampling error […]. In reasonably large samples, the impact of real error may be greater than that of sampling error.”
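The claim that measurement error reduces absolute effect sizes is the classical attenuation result (r_observed ≈ r_true × sqrt(rel_X × rel_Y)); here is a quick simulation sketch with made-up reliabilities of .50 for both measures:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000

# True scores on two constructs, correlated at about .60.
true_x = rng.normal(size=n)
true_y = 0.6 * true_x + np.sqrt(1 - 0.6**2) * rng.normal(size=n)

# Observed score = true score + measurement error (reliability = .50 each,
# since true-score variance and error variance are both 1).
obs_x = true_x + rng.normal(size=n)
obs_y = true_y + rng.normal(size=n)

r_true = np.corrcoef(true_x, true_y)[0, 1]
r_obs = np.corrcoef(obs_x, obs_y)[0, 1]

# Attenuation: r_obs should be near .60 * sqrt(.50 * .50) = .30
print(f"r_true = {r_true:.3f}, r_obs = {r_obs:.3f}")
```

The observed correlation is roughly half the true one, purely because of unreliable measurement; no amount of extra sampling fixes this, which is why standard errors alone understate the “real error”.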

“The technique of bootstrapping […] is a computer-based method of resampling that recombines the cases in a data set in different ways to estimate statistical precision, with fewer assumptions than traditional methods about population distributions. Perhaps the best known form is nonparametric bootstrapping, which generally makes no assumptions other than that the distribution in the sample reflects the basic shape of that in the population. It treats your data file as a pseudo-population in that cases are randomly selected with replacement to generate other data sets, usually of the same size as the original. […] The technique of nonparametric bootstrapping seems well suited for interval estimation when the researcher is either unwilling or unable to make a lot of assumptions about population distributions. […] potential limitations of nonparametric bootstrapping: 1. Nonparametric bootstrapping simulates random sampling, but true random sampling is rarely used in practice. […] 2. […] If the shape of the sample distribution is very different compared with that in the population, results of nonparametric bootstrapping may have poor external validity. 3. The “population” from which bootstrapped samples are drawn is merely the original data file. If this data set is small or the observations are not independent, resampling from it will not somehow fix these problems. In fact, resampling can magnify the effects of unusual features in a small data set […] 4. Results of bootstrap analyses are probably quite biased in small samples, but this is true of many traditional methods, too. […] [In] parametric bootstrapping […] the researcher specifies the numerical and distributional properties of a theoretical probability density function, and then the computer randomly samples from that distribution. 
When repeated many times by the computer, values of statistics in these synthesized samples vary randomly about the parameters specified by the researcher, which simulates sampling error.”
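A minimal sketch of the nonparametric (percentile) bootstrap described above; the statistic, sample size, and number of resamples are my own choices:

```python
import numpy as np

rng = np.random.default_rng(5)

data = rng.exponential(scale=2.0, size=60)  # a small, skewed "observed" sample

# Nonparametric bootstrap: resample the cases with replacement, same n as the
# original, and collect the statistic of interest (here the mean) each time.
n_boot = 10_000
boot_means = np.array([
    rng.choice(data, size=data.size, replace=True).mean()
    for _ in range(n_boot)
])

# Percentile 95% confidence interval for the mean.
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"sample mean = {data.mean():.2f}, 95% CI = [{lo:.2f}, {hi:.2f}]")
```

Note how the sketch makes the limitations in the quote visible: the “population” being resampled is just the 60 observed cases, so any quirks of this particular small sample propagate into every bootstrapped interval.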

July 9, 2017 Posted by | Books, Psychology, Statistics

A few SSC comments

I recently left a few comments in an open thread on SSC, and I figured it might make sense to crosspost some of the comments made there here on the blog. I haven’t posted all my contributions to the debate here, rather I’ve just quoted some specific comments and observations which might be of interest. I’ve also added some additional remarks and comments which relate to the topics discussed. Here’s the main link (scroll down to get to my comments).

“One thing worth keeping in mind when evaluating pre-modern medicine characterizations of diabetes and the natural history of diabetes is incidentally that, especially to the extent that one is interested in type 1, survivorship bias is a major problem lurking in the background. Prognostic estimates of untreated type 1 based on historical accounts of how long people could live with the disease before insulin are not in my opinion likely to be all that reliable, because the type of patients that would be recognized as (type 1) diabetics back then would tend to mainly be people who had the milder forms, because they were the only ones who lived long enough to reach a ‘doctor’; and the longer they lived, and the milder the sub-type, the more likely they were to be studied/’diagnosed’. I was a 2-year-old boy who got unwell on a Tuesday and was hospitalized three days later. Avicenna would have been unlikely to have encountered me; I’d have died before he saw me. (Similar lines of reasoning might lead to an argument that the incidence of diseases like type 1 diabetes may also today be underestimated in developing countries with poorly developed health care systems.)”

Douglas Knight mentioned during our exchange that medical men of the far past might have been more likely to attend to patients with acute illnesses than to patients with chronic conditions, a point I didn’t discuss in any detail during the exchange. I did however think it important to note here that information exchange was significantly slower, and transportation costs were much higher, in the past than they are today. This should make such a bias less relevant, all else equal. Avicenna and his colleagues couldn’t take a taxi, or learn by phone that X is sick. He might have preferentially attended to the acute cases he learned about, but given high transportation costs and inefficient communication channels he might often never have arrived in time, or at all. A particular problem here is that there are no good data on the unobserved cases, because the only cases we know about today are the ones people like him have told us about.

Some more comments:

“One thing I was considering adding to my remarks about survivorship bias is that it is not in my opinion unlikely that what you might term the nature of the disease has changed over the centuries; indeed it might still be changing today. Globally the incidence of type 1 has been increasing for decades and nobody seems to know why, though there’s consensus about an environmental trigger playing a major role. Maybe incidence is not the only thing that’s changed, maybe e.g. the time course of the ‘average case’ has also changed? Maybe due to secondary factors; better nutritional status now equals slower progression of beta cell failure than was the case in the past? Or perhaps the other way around: less exposure today to the bacterial agents the immune system has, throughout evolutionary time, been used to dealing with means that the autoimmune process is accelerated, compared to the far past when standards of hygiene were different. Who knows? […] Maybe survivorship bias wasn’t that big of a deal, but I think one should be very cautious about which assumptions one might implicitly be making along the way when addressing questions of this nature. Some relevant questions will definitely be unknowable due to lack of good data which we will never be able to obtain.”

I should perhaps interpose here that even if survivorship bias ‘wasn’t that big of a deal’, it’s still a big problem in the analytical setting, because it is perfectly plausible to assume that it might have been. These kinds of problems magnify our error bars and reduce confidence in our conclusions, regardless of the extent to which they actually played a role. When you know the exact sign and magnitude of a given moderating effect you can try to correct for it, but this is very difficult to do when a large range of moderator effect sizes might be considered plausible. It might also here be worth mentioning explicitly that biases such as the survivorship bias mentioned can of course impact a lot of things besides just the prognostic estimates; for example if a lot of cases never came to the attention of the medical people because these people were unavailable (due to distance, cost, lack of information, etc.) to the people who were sick, incidence and prevalence will also implicitly be underestimated. And so on. Back to the comments:

“Once you had me thinking that it might have been harder [for people in the past] to distinguish [between type 1 and type 2 diabetes] than […] it is today, I started wondering about this, and the comments below relate to this topic. An idea that came to mind in relation to the type 1/type 2 distinction and the ability of people in the past to make this distinction: I’ve worked on various identification problems present in the diabetes context before, and I know that people even today make misdiagnoses and e.g. categorize type 1 diabetics as type 2. I asked a diabetes nurse working in the local endocrinology unit about this at one point, and she told me they had actually had a patient not long before then who had been admitted a short while after having been diagnosed with type 2. Turned out he was type 1, so the treatment failed. Misdiagnoses happen for multiple reasons, one is that obese people also sometimes develop type 1, and if it’s an acute onset setting the weight loss is not likely to be very significant. Patient history should in such a case provide the doctor with the necessary clues, but if the guy making the diagnosis is a stressed out GP who’s currently treating a lot of obese patients for type 2, mistakes happen. ‘Pre-scientific method’ this sort of individual would have been inconvenient to encounter, because a ‘counter-example’ like that supposedly demonstrating that the obese/thin(/young/old, acute/protracted…) distinction was ‘invalid’ might have held a lot more weight than it hopefully would today in the age of statistical analysis. A similar problem would be some of the end-stage individuals: A type 1 pre-insulin would be unlikely to live long enough to develop long term complications of the disease, but would instead die of DKA. The problem is that some untreated type 2 patients also die of DKA, though the degree of ketosis varies in type 2 patients. DKA in type 2 could e.g. be triggered by a superimposed cardiovascular event or an infection, increasing metabolic demands to an extent that can no longer be met by the organism, and so might well present just as acutely as it would in a classic acute-onset type 1 case. Assume the opposite bias you mention is playing a role; the ‘doctor’ in the past is more likely to see the patients in such a life-threatening setting than in the earlier stages. He observes a 55 year old fat guy dying in a very similar manner to the way a 12 year old girl died a few months back (very characteristic symptoms, breath smells fruity, Kussmaul respiration, polyuria and polydipsia…). What does he conclude? Are these different diseases?”

Making the doctor’s decision problem even harder is of course the fact that type 2 diabetes even today often goes undiagnosed until complications arise. Some type 2 patients get their diagnosis only after they had their first heart attack as a result of their illness. So the hypothetical obese middle-aged guy presenting with DKA might not have been known by anyone to be ‘a potentially different kind of diabetic’.

‘The Nybbler’ asked this question in the thread: “Wouldn’t reduced selection pressure be a major reason for increase of Type I diabetes? Used to be if you had it, chance of surviving to reproduce was close to nil.”

I’ll mention here that I’ve encountered this kind of theorizing before, but that I’ve never really addressed it – especially the second part – explicitly, though I’ve sometimes felt like doing that. I figured this post might be a decent place to at least scratch the surface. The idea that there are more type 1 diabetics now than there used to be because type 1 diabetics used to die of their disease and now they don’t (…and so now they are able to transmit their faulty genes to subsequent generations, leading to more diabetic individuals over time) sounds sort of reasonable if you don’t know very much about diabetes, but it sounds less reasonable to people who do. Genes matter, and changed selection pressures have probably played a role, but I find it hard to believe this particular mechanism is a major factor. I have included both of my replies to ‘Nybbler’ below:

First comment:

“I’m not a geneticist and this is sort-of-kind-of near the boundary area of where I feel comfortable providing answers (given that others may be more qualified to evaluate questions like this than I am). However a few observations which might be relevant are the following:

i) Although I’ll later go on to say that vertical transmission is low, I first have to point out that some people who developed type 1 diabetes in the past did in fact have offspring, though there’s no doubt about the condition being fitness-reducing to a very large degree. The median age of diagnosis of type 1 is somewhere in the teenage years (…today. Was it the same way 1000 years ago, or has the age profile changed over time? This again relates to questions asked elsewhere in this discussion…), but people above the age of 30 get type 1 too.

ii) Although type 1 displays some level of familial clustering, most cases of type 1 are not the result of diabetics having had children who then proceed to inherit their parents’ disease. To the extent that reduced selection is a driver of increased incidence, the main cause would be broad selection effects pertaining to immune system functioning in general in the total population at risk (i.e. children in general, including many children with what might be termed suboptimal immune system functioning, being more likely to survive and later develop type 1 diabetes), not effects derived from vertical transmission of the disease (from parent to child). Roughly 90% of newly diagnosed type 1 diabetics in population studies have a negative family history of the disease, and on average only 2% of the children of type 1 diabetic mothers, and 5% of the children of type 1 diabetic fathers, go on to develop type 1 diabetes themselves.

iii) Historically vertical transmission has even in modern times been low. On top of the quite low transmission rates mentioned above, until well into the 1980s or 1990s many type 1 diabetic females were explicitly advised by their medical care providers not to have children, not because of the genetic risk of disease transmission but because pregnancy outcomes were likely to be poor; and many of those who disregarded the advice gave birth to offspring who were at a severe fitness disadvantage from the start. Poorly controlled diabetes during pregnancy leads to a very high risk of birth defects and/or miscarriage, and may pose health risks to the mother as well through e.g. an increased risk of preeclampsia (relevant link). It is only very recently that we’ve developed the knowledge and medical technology required to make pregnancy a reasonably safe option for female diabetics. You still had some diabetic females who gave birth before developing diabetes, like in the far past, and the situation was different for males, but either way I feel reasonably confident claiming that if you look for genetic causes of increasing incidence, vertical transmission should not be the main factor to consider.

iv) You need to be careful when evaluating questions like these to keep a distinction between questions relating to drivers of incidence and questions relating to drivers of prevalence at the back of your mind. These two sets of questions are not equivalent.

v) If people are interested to know more about the potential causes of increased incidence of type 1 diabetes, here’s a relevant review paper.”

I followed up with a second comment a while later, because I figured a few points of interest might not have been sufficiently well addressed in my first comment:

“@Nybbler:

A few additional remarks.

i) “Temporal trends in chronic disease incidence rates are almost certainly environmentally induced. If one observes a 50% increase in the incidence of a disorder over 20 yr, it is most likely the result of changes in the environment because the gene pool cannot change that rapidly. Type 1 diabetes is a very dynamic disease. […] results clearly demonstrate that the incidence of type 1 diabetes is rising, bringing with it a large public health problem. Moreover, these findings indicate that something in our environment is changing to trigger a disease response. […] With the exception of a possible role for viruses and infant nutrition, the specific environmental determinants that initiate or precipitate the onset of type 1 diabetes remain unclear.” (Type 1 Diabetes, Etiology and Treatment. Just to make it perfectly clear that although genes matter, environmental factors are the most likely causes of the rising levels of incidence we’ve seen in recent times.)

ii. Just as you need to always keep incidence and prevalence in mind when analyzing these things (for example low prevalence does not mean incidence is necessarily low, or was low in the past; low prevalence could also be a result of a combination of high incidence and high case mortality. I know from experience that even diabetes researchers tend to sometimes overlook stuff like this), you also need to keep the distinction between genotype and phenotype in mind. Given the increased importance of one or more environmental triggers in modern times, penetrance is likely to have changed over time. This means for example that ‘a diabetic genotype’ may have been less fitness reducing in the past than it is today, even if the associated ‘diabetic phenotype’ may on the other hand have been much more fitness reducing than it is now; people who developed diabetes died, but many of the people who might in the current environment be considered high-risk cases may not have been high risk in the far past, because the environmental trigger causing disease was absent, or rarely encountered. Assessing genetic risk for diabetes is complicated, and there’s no general formula for calculating this risk either in the type 1 or type 2 case; monogenic forms of diabetes do exist, but they account for a very small proportion of cases (1-5% of diabetes in young individuals) – most cases are polygenic and display variable levels of penetrance. Note incidentally that a story of environmental factors becoming more important over time is actually implicitly also, to the extent that diabetes is/has been fitness-reducing, a story of selection pressures against diabetic genotypes potentially increasing over time, rather than the opposite (which seems to be the default assumption when only taking into account stuff like the increased survival rates of type 1 diabetics over time). This stuff is complicated.”
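The incidence/prevalence distinction stressed in point ii follows from the standard steady-state approximation, prevalence ≈ incidence rate × mean disease duration; here is a back-of-the-envelope sketch with made-up numbers, not real diabetes data:

```python
# Steady-state approximation: prevalence ~= incidence rate * mean duration.
# Illustrative numbers only, not actual epidemiological estimates.

incidence = 20 / 100_000        # 20 new cases per 100,000 person-years

# Same incidence, very different prevalence depending on survival:
duration_short = 1.0            # years: high case mortality (pre-insulin-like)
duration_long = 40.0            # years: long survival with treatment

prev_short = incidence * duration_short
prev_long = incidence * duration_long

print(f"prevalence with short survival: {prev_short:.5f}")
print(f"prevalence with long survival:  {prev_long:.5f}")
```

With identical incidence, prevalence differs by a factor of forty, which is why observing low prevalence in some historical or modern population tells you very little about incidence on its own.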

I wasn’t completely happy with my second comment (I wrote it relatively fast and didn’t have time to go over it in detail after I’d written it), so I figured it might make sense to add a few details here. One key idea here is of course that you need to distinguish between people who are ‘vulnerable’ to developing type 1 diabetes, and people who actually do develop the disease. If fewer people who today would be considered ‘vulnerable’ developed the disease in the past than is the case now, selection against the ‘vulnerable’ genotype would – all else equal – have been lower throughout evolutionary time than it is today.

All else is not equal because of insulin treatment. But a second key point is that when you’re interested in fitness effects, mortality is not the only variable of interest; many diabetic women who were alive because of insulin during the 20th century but who were also being discouraged from having children may well have left no offspring. Males who committed suicide or died from kidney failure in their twenties likely also didn’t leave many offspring. Another point related to the mortality variable is that although diabetes mortality might in the past have been approximated reasonably well by a simple binary outcome variable/process (no diabetes = alive, diabetes = dead), type 1 diabetes has had large effects on mortality rates also throughout the chunk of history during which insulin has been a treatment option; mortality rates 3 or 4 times higher than those of non-diabetics are common in population studies, and such mortality rates add up over time even if base rates are low, especially in a fitness context, as for most type 1 diabetics they are at play throughout the entire fertile period of the life history. Type 2 diabetes is diagnosed mainly in middle-aged individuals, many of whom have already completed their reproductive cycle, but type 1 diabetes is very different in that respect. Of course there are multiple indirect effects at play as well here, e.g. those of mate choice; which is the more attractive potential partner, the individual with diabetes or the one without? What if the diabetic also happens to be blind?

A few other quotes from the comments:

“The majority of patients on insulin in the US are type 2 diabetics, and it is simply wrong that type 2 diabetics are not responsive to insulin treatment. They were likely found to be unresponsive in early trials because of errors of dosage, as they require higher doses of the drug to obtain the same effect as young patients diagnosed with type 1 (the primary group on insulin in the 1930s). However, insulin treatment is not the first-line option in the type 2 context because the condition can usually be treated with insulin-sensitizing agents for a while, until they fail (those drugs will on average fail in something like ~50% of subjects within five years of diagnosis, which – combined with the much higher prevalence of type 2, an order of magnitude or more depending on where you are – is the reason why a majority of patients on insulin have type 2), and these tend to a) be more acceptable to the patients (a pill vs an injection) and b) have fewer/less severe side effects on average. One reason which also played a major role in delaying the necessary use of insulin to treat type 2 diabetes which could not be adequately controlled via other means was incidentally the fact that insulin ca[u]ses weight gain, and the obesity-type 2 link was well known.”

“Type 1 is autoimmune, and most cases of type 2 are not, but some forms of type 2 seem to have an autoimmune component as well (“the overall autoantibody frequency in type 2 patients varies between 6% and 10%” – source) (these patients, who can be identified through genetic markers, will on average proceed to insulin dependence because of treatment failure in the context of insulin-sensitizing agents much sooner than is usually the case in patients with type 2). In general type 1 is caused by autoimmune beta cell destruction and type 2 mainly by insulin resistance, but combinations of the two are also possible […], and patients with type 1 can develop insulin resistance just as patients with type 2 can lose beta cells via multiple pathways. The major point here being that the sharp diagnostic distinction between type 1 and type 2 is a major simplification of what’s really going on, and it’s hiding a lot of heterogeneity in both samples. Some patients with type 1 will develop diabetes acutely or subacutely, within days or hours, whereas others will have elevated blood glucose levels for months before medical attention is received and a diagnosis is made (you can tell whether or not blood glucose has been elevated pre-diagnosis by looking at one of the key diagnostic variables, HbA1c, which is a measure of the average blood glucose over the entire lifetime of a red blood cell (~3-4 months) – in some newly diagnosed type 1s, this variable is elevated, in others it is not). Some type 1 patients will develop other autoimmune conditions later on, whereas others will not, and some will be more likely to develop complications than others who have the same level of glycemic control.

Type 1 and type 2 diabetes are quite different conditions, but in terms of many aspects of the diseases there are significant degrees of overlap (for example they develop many of the same complications, for similar (pathophysiological) reasons), yet they are both called diabetes. You don’t want to treat a type 2 diabetic with insulin if he can be treated with metformin, and treating a type 1 with metformin will not help – so different treatments are required.”

“In terms of whether it’s ideal to have one autistic diagnostic group or two (…or three, or…) [this question was a starting point for the debate from which I quote, but I decided not to go much into this topic here], I maintain that to a significant extent the answer to that question relates to what the diagnosis is supposed to accomplish. If it makes sense for researchers to be able to distinguish, which it probably does, but it is not necessary for support organizers/providers to know the subtype in order to provide aid, then you might end up with one ‘official’ category and two (or more) ‘research categories’. I would be fine with that (but again I don’t find this discussion interesting). Again a parallel might be made to diabetes research: Endocrinologists are well aware that there’s a huge amount of variation in both the type 1 and type 2 samples, to the extent that it’s sort of silly to even categorize these illnesses using the same name, but they do it anyway for reasons which are sort of obvious. If you’re type 1 diabetic and you have an HLA mutation which made you vulnerable to diabetes and you developed diabetes at the age of 5, well, we’ll start you on insulin, try to help you achieve good metabolic control, and screen you regularly for complications. If on the other hand you’re an adult guy who due to a very different genetic vulnerability developed type 1 diabetes at the age of 30 (and later on Graves’ disease at the age of 40, due to the same mutation), well, we’ll start you on insulin, try to help you achieve good metabolic control, and screen you regularly for complications. The only thing type 1 diabetics have in common is the fact that their beta cells die due to some autoimmune processes. But it could easily be conceived of instead as literally hundreds of different diseases. 
Currently the distinctions between the different disease-relevant pathophysiological processes don’t matter very much in the treatment context, but they might do that at some point in the future, and if that happens the differences will start to become more important. People might at that point start to talk about type 1a diabetes, which might be the sort you can delay or stop with gene therapy, and type 1b which you can’t delay or stop (…yet). Lumping ‘different’ groups together into one diagnostic category is bad if it makes you overlook variation which is important, and this may be a problem in the autism context today, but regardless of the sizes of the diagnostic groups you’ll usually still end up with lots of residual (‘unexplained’) variation.”

I can’t recall to what extent I’ve discussed this last topic – the extent to which type 1 diabetes is best modeled as one illness or many – but it’s an important topic to keep at the back of your mind when you’re reading the diabetes literature. I’m assuming that in some contexts the subgroup heterogeneities, e.g. in terms of treatment response, will be much more important than in others, so you probably need specific subject matter knowledge to make any sort of informed decision about the extent to which potential unobserved heterogeneities may be important in a specific setting; but even if you don’t have that, ‘a healthy skepticism’ – derived from keeping in mind the potential for these factors to play a role – is likely to be more useful than the alternative. In that context I think the (poor, but understandable) standard practice of lumping together type 1 and type 2 diabetics in studies may lead many people familiar with the differences between the two conditions to think along the lines that as long as you know the type, you’re good to go – ‘at least this study only looked at type 1 individuals, not like those crappy studies which do not distinguish between type 1 and type 2, so I can definitely trust these results to apply to the subgroup of type 1 diabetics in which I’m interested’ – and I think this tendency, to the extent that it exists, is unfortunate.

July 8, 2017 Posted by | autism, Diabetes, Epidemiology, Genetics, Medicine, Psychology

The Personality Puzzle (II)

I have added some more quotes and observations from the book below. Some of the stuff covered in this post is very closely related to material I’ve previously covered on the blog, e.g. here and here, but I didn’t mind reviewing this stuff here. If you’re already familiar with Funder’s RAM model of personality judgment you can probably skip the last half of the post without missing out on anything.

“[T]he trait approach [of personality psychology] focuses exclusively on individual differences. It does not attempt to measure how dominant, sociable, or nervous anybody is in an absolute sense; there is no zero point on any dominance scale or on any measure of any other trait. Instead, the trait approach seeks to measure the degree to which a person might be more or less dominant, sociable, or nervous than someone else. (Technically, therefore, trait measurements are made on ordinal rather than ratio scales.) […] Research shows that the stability of the differences between people increases with age […] According to one major summary of the literature, the correlation coefficient reflecting consistency of individual differences in personality is .31 across childhood, .54 during the college years, and .74 between the ages of 50 and 70 […] The main reason personality becomes more stable during the transition from child to adult to senior citizen seems to be that one’s environment also gets more stable with age […] According to one major review, longitudinal data show that, on average, people tend to become more socially dominant, agreeable, conscientious, and emotionally stable (lower on neuroticism) over time […] [However] people differ from each other in the degree to which they have developed a consistent personality […] Several studies suggest that the consistency of personality is associated with maturity and general mental health […] More-consistent people appear to be less neurotic, more controlled, more mature, and more positive in their relations with others (Donnellan, Conger, & Burzette, 2007; Roberts, Caspi, & Moffitt, 2001; Sherman, Nave, & Funder, 2010).”

“Despite the evidence for the malleability of personality […], it would be a mistake to conclude that change is easy. […] most people like their personalities pretty much the way they are, and do not see any reason for drastic change […] Acting in a way contrary to one’s traits takes effort and can be exhausting […] Second, people have a tendency to blame negative experiences and failures on external forces rather than recognizing the role of their own personality. […] Third, people generally like their lives to be consistent and predictable […] Change requires learning new skills, going new places, meeting new people, and acting in unaccustomed ways. That can make it uncomfortable. […] personality change has both a downside and an upside. […] people tend to like others who are “judgeable,” who are easy to understand, predict, and relate to. But when they don’t know what to expect or how to predict what a person will do, they are more likely to avoid that person. […] Moreover, if one’s personality is constantly changing, then it will be difficult to choose consistent goals that can be pursued over the long term.”

“There is no doubt that people change their behavior from one situation to the next. This obvious fact has sometimes led to the misunderstanding that personality consistency somehow means “acting the same way all the time.” But that’s not what it means at all. […] It is individual differences in behavior that are maintained across situations, not how much a behavior is performed. […] as the effect of the situation gets stronger, the effect of the person tends to get weaker, and vice versa. […] any fair reading of the research literature make one thing abundantly clear: When it comes to personality, one size does not fit all. People really do act differently from each other. Even when they are all in the same situation, some individuals will be more sociable, nervous, talkative, or active than others. And when the situation changes, those differences will still be there […] the evidence is overwhelming that people are psychologically different from one another, that personality traits exist, that people’s impressions of each other’s personalities are based on reality more than cognitive error, and that personality traits affect important life outcomes […] it is […] important to put the relative role of personality traits and situations into perspective. Situational variables are relevant to how people will act under specific circumstances. Personality traits are better for describing how people act in general […] A sad legacy of the person-situation debate is that many psychologists became used to thinking of the person and the situation as opposing forces […] It is much more accurate to see persons and situations as constantly interacting to produce behavior together. […] Persons and situations interact in three major ways […] First, the effect of a personality variable may depend on the situation, or vice versa. […] Certain types of people go to or find themselves in different types of situations. This is the second kind of person-situation interaction. 
[…] The third kind of interaction stems from the way people change situations by virtue of what they do in them.”

“Shy people are often lonely and may deeply wish to have friends and normal social interactions, but are so fearful of the process of social involvement that they become isolated. In some cases, they won’t ask for help when they need it, even when someone who could easily solve their problem is nearby […]. Because shy people spend a lot of time by themselves, they deny themselves the opportunity to develop normal social skills. When they do venture out, they are so out of practice they may not know how to act. […] A particular problem for shy people is that, typically, others do not perceive them as shy. Instead, to most observers, they seem cold and aloof. […] shy people generally are not cold and aloof, or at least they do not mean to be. But that is frequently how they are perceived. That perception, in turn, affects the lives of shy people in important negative ways and is part of a cycle that perpetuates shyness […] the judgments of others are an important part of the social world and can have a significant effect on personality and life. […] Judgments of others can also affect you through “self-fulfilling prophecies,” more technically known as expectancy effects. These effects can influence both intellectual performance and social behavior.”

“Because people constantly make personality judgments, and because these judgments are consequential, it would seem important to know when and to what degree these judgments are accurate. […] [One relevant] method is called convergent validation. […] Convergent validation is achieved by assembling diverse pieces of information […] that “converge” on a common conclusion […] The more items of diverse information that converge, the more confident the conclusion […] For personality judgments, the two primary converging criteria are interjudge agreement and behavioral prediction. […] psychological research can evaluate personality judgments by asking two questions […] (1) Do the judgments agree with one another? (2) Can they predict behavior? To the degree the answers are Yes, the judgments are probably accurate.”
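The first of those two criteria, interjudge agreement, is in practice just the correlation between different judges’ ratings of the same set of targets. A minimal sketch, with made-up ratings (the data and the trait are invented for illustration):

```python
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length rating vectors."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical extraversion ratings of eight targets by two acquaintances
judge_a = [4, 7, 5, 8, 3, 6, 2, 7]
judge_b = [5, 6, 5, 7, 2, 7, 3, 6]

print(f"Interjudge agreement: r = {pearson_r(judge_a, judge_b):.2f}")
```

A high r means the two judges rank the targets similarly, which is what “agreement” amounts to in this literature; behavioral prediction is assessed the same way, by correlating judgments with measured behavior.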

“In general, judges [of personality] will reach more accurate conclusions if the behaviors they observe are closely related to the traits they are judging. […] A moderator of accuracy […] is a variable that changes the correlation between a judgment and its criterion. Research on accuracy has focused primarily on four potential moderators: properties (1) of the judge, (2) of the target (the person who is judged), (3) of the trait that is judged, and (4) of the information on which the judgment is based. […] Do people know whether they are good judges of personality? The answer appears to be both no and yes […]. No, because people who describe themselves as good judges, in general, are no better than those who rate themselves as poorer in judgmental ability. But the answer is yes, in another sense. When asked which among several acquaintances they can judge most accurately, most people are mostly correct. In other words, we can tell the difference between people who we can and cannot judge accurately. […] Does making an extra effort to be accurate help? Research results so far are mixed.”

“When it comes to accurate judgment, who is being judged might be even more important than who is doing the judging. […] People differ quite a lot in how accurately they can be judged. […] “Judgable” people are those about whom others reach agreement most easily, because they are the ones whose behavior is most predictable from judgments of their personalities […] The behavior of judgable people is organized coherently; even acquaintances who know them in separate settings describe essentially the same person. Furthermore, the behavior of such people is consistent; what they do in the future can be predicted from what they have done in the past. […] Theorists have long postulated that it is psychologically healthy to conceal as little as possible from those around you […]. If you exhibit a psychological façade that produces large discrepancies between the person “inside” and the person you display “outside,” you may feel isolated from the people around you, which can lead to unhappiness, hostility, and depression. Acting in a way that is contrary to your real personality takes effort, and can be psychologically tiring […]. Evidence even suggests that concealing your emotions may be harmful to physical health”.

“All traits are not created equal — some are much easier to judge accurately than others. For example, more easily observed traits, such as “talkativeness,” “sociability,” and other traits related to extraversion, are judged with much higher levels of interjudge agreement than are less visible traits, such as cognitive and ruminative styles and habits […] To find out about less visible, more internal traits like beliefs or tendencies to worry, self-reports […] are more informative […] [M]ore information is usually better, especially when judging certain traits. […] Quantity is not the only important variable concerning information. […] it can be far more informative to observe a person in a weak situation, in which different people do different things, than in a strong situation, in which social norms restrict what people do […] The best situation for judging someone’s personality is one that brings out the trait you want to judge. To evaluate a person’s approach toward his work, the best thing to do is to observe him working. To evaluate a person’s sociability, observations at a party would be more informative […] The accurate judgment of personality, then, depends on both the quantity and the quality of the information on which it is based. More information is generally better, but it is just as important for the information to be relevant to the traits that one is trying to judge.”

“In order to get from an attribute of an individual’s personality to an accurate judgment of that trait, four things must happen […]. First, the person being judged must do something relevant; that is, informative about the trait to be judged. Second, this information must be available to a judge. Third, this judge must detect this information. Fourth and finally, the judge must utilize this information correctly. […] If the process fails at any step — the person in question never does something relevant, or does it out of sight of the judge, or the judge doesn’t notice, or the judge makes an incorrect interpretation — accurate personality judgment will fail. […] Traditionally, efforts to improve accuracy have focused on attempts to get judges to think better, to use good logic and avoid inferential errors. These efforts are worthwhile, but they address only one stage — utilization — out of the four stages of accurate personality judgment. Improvement could be sought at the other stages as well […] Becoming a better judge of personality […] involves much more than “thinking better.” You should also try to create an interpersonal environment where other people can be themselves and where they feel free to let you know what is really going on.”

July 5, 2017 Posted by | Books, Psychology

A few diabetes papers of interest

i. An Inverse Relationship Between Age of Type 2 Diabetes Onset and Complication Risk and Mortality: The Impact of Youth-Onset Type 2 Diabetes.

“This study compared the prevalence of complications in 354 patients with T2DM diagnosed between 15 and 30 years of age (T2DM15–30) with that in a duration-matched cohort of 1,062 patients diagnosed between 40 and 50 years (T2DM40–50). It also examined standardized mortality ratios (SMRs) according to diabetes age of onset in 15,238 patients covering a wider age-of-onset range.”

“After matching for duration, despite their younger age, T2DM15–30 had more severe albuminuria (P = 0.004) and neuropathy scores (P = 0.003). T2DM15–30 were as commonly affected by metabolic syndrome factors as T2DM40–50 but less frequently treated for hypertension and dyslipidemia (P < 0.0001). An inverse relationship between age of diabetes onset and SMR was seen, which was the highest for T2DM15–30 (3.4 [95% CI 2.7–4.2]). SMR plots adjusting for duration show that for those with T2DM15–30, SMR is the highest at any chronological age, with a peak SMR of more than 6 in early midlife. In contrast, mortality for older-onset groups approximates that of the background population.”
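As a reminder of what an SMR is: observed deaths in the cohort divided by the deaths expected if the cohort had experienced background population rates, with confidence limits commonly obtained via Byar’s approximation to the exact Poisson limits. A sketch with invented counts (the observed/expected numbers below are not the study’s data):

```python
def smr_with_ci(observed, expected, z=1.96):
    """Standardized mortality ratio with Byar's approximate 95% CI."""
    smr = observed / expected
    # Byar's approximation to the exact Poisson confidence limits
    lower = (observed / expected) * (1 - 1 / (9 * observed)
                                     - z / (3 * observed ** 0.5)) ** 3
    o1 = observed + 1
    upper = (o1 / expected) * (1 - 1 / (9 * o1)
                               + z / (3 * o1 ** 0.5)) ** 3
    return smr, lower, upper

# Hypothetical cohort: 100 deaths observed where population rates predict 50
smr, lo, hi = smr_with_ci(100, 50)
print(f"SMR = {smr:.1f} (95% CI {lo:.2f}-{hi:.2f})")
```

An SMR of 3.4, as reported for the youngest-onset group, thus means 3.4 times as many deaths as background rates would predict for people of the same age and sex.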

“Young people with type 2 diabetes are likely to be obese, with a clustering of unfavorable cardiometabolic risk factors all present at a very early age (3,4). In adolescents with type 2 diabetes, a 10–30% prevalence of hypertension and an 18–54% prevalence of dyslipidemia have been found, much greater than would be expected in a population of comparable age (4).”

“CONCLUSIONS The negative effect of diabetes on morbidity and mortality is greatest for those diagnosed at a young age compared with T2DM of usual onset.”

It’s important to keep base rates in mind when interpreting the reported SMRs, but either way this is interesting.

ii. Effects of Sleep Deprivation on Hypoglycemia-Induced Cognitive Impairment and Recovery in Adults With Type 1 Diabetes.

“OBJECTIVE To ascertain whether hypoglycemia in association with sleep deprivation causes greater cognitive dysfunction than hypoglycemia alone and protracts cognitive recovery after normoglycemia is restored.”

“CONCLUSIONS Hypoglycemia per se produced a significant decrement in cognitive function; coexisting sleep deprivation did not have an additive effect. However, after restoration of normoglycemia, preceding sleep deprivation was associated with persistence of hypoglycemic symptoms and greater and more prolonged cognitive dysfunction during the recovery period. […] In the current study of young adults with type 1 diabetes, the impairment of cognitive function that was associated with hypoglycemia was not exacerbated by sleep deprivation. […] One possible explanation is that hypoglycemia per se exerts a ceiling effect on the degree of cognitive dysfunction as is possible to demonstrate with conventional tests.”

iii. Intensive Diabetes Treatment and Cardiovascular Outcomes in Type 1 Diabetes: The DCCT/EDIC Study 30-Year Follow-up.

“The DCCT randomly assigned 1,441 patients with type 1 diabetes to intensive versus conventional therapy for a mean of 6.5 years, after which 93% were subsequently monitored during the observational Epidemiology of Diabetes Interventions and Complications (EDIC) study. Cardiovascular disease (nonfatal myocardial infarction and stroke, cardiovascular death, confirmed angina, congestive heart failure, and coronary artery revascularization) was adjudicated using standardized measures.”

“During 30 years of follow-up in DCCT and EDIC, 149 cardiovascular disease events occurred in 82 former intensive treatment group subjects versus 217 events in 102 former conventional treatment group subjects. Intensive therapy reduced the incidence of any cardiovascular disease by 30% (95% CI 7, 48; P = 0.016), and the incidence of major cardiovascular events (nonfatal myocardial infarction, stroke, or cardiovascular death) by 32% (95% CI −3, 56; P = 0.07). The lower HbA1c levels during the DCCT/EDIC statistically account for all of the observed treatment effect on cardiovascular disease risk.”

“CONCLUSIONS Intensive diabetes therapy during the DCCT (6.5 years) has long-term beneficial effects on the incidence of cardiovascular disease in type 1 diabetes that persist for up to 30 years.”

I was of course immediately thinking that perhaps they had not considered whether this might just be the result of the HbA1c differences achieved during the trial being maintained long-term (during follow-up), so that what they were doing was not so much measuring the effect of the ‘metabolic memory’ component as simply measuring standard population outcome differences resulting from long-term HbA1c differences. But they (of course) had thought about that, and that’s not what’s going on here, which is what makes it particularly interesting:

“Mean HbA1c during the average 6.5 years of DCCT intensive therapy was ∼2% (20 mmol/mol) lower than that during conventional therapy (7.2 vs. 9.1% [55.6 vs. 75.9 mmol/mol], P < 0.001). Subsequently during EDIC, HbA1c differences between the treatment groups dissipated. At year 11 of EDIC follow-up and most recently at 19–20 years of EDIC follow-up, there was only a trivial difference between the original intensive and conventional treatment groups in the mean level of HbA1c.”

They do admittedly find a statistically significant difference between the HbA1c levels of the two groups when you look at (weighted) HbA1c long-term, but that difference is certainly nowhere near large enough to explain the clinical differences in outcomes you observe. Another argument in favour of the view that what’s driving these differences is metabolic memory is the observation that the differences in outcomes between the treatment and control groups are smaller now than they were ten years ago (my default would, if anything, probably be to expect the outcomes of the two groups to converge long-term if the samples were properly randomized to start with, but this is not the only plausible model, and it sort of depends on how you model the risk function, as they also discuss in the paper):

“[T]he risk reduction of any CVD with intensive therapy through 2013 is now less than that reported previously through 2004 (30% [P = 0.016] vs. 47% [P = 0.005]), and likewise, the risk reduction per 10% lower mean HbA1c through 2013 was also somewhat lower than previously reported but still highly statistically significant (17% [P = 0.0001] vs. 20% [P = 0.001]).”

iv. Commonly Measured Clinical Variables Are Not Associated With Burden of Complications in Long-standing Type 1 Diabetes: Results From the Canadian Study of Longevity in Diabetes.

“The Canadian Study of Longevity in Diabetes actively recruited 325 individuals who had T1D for 50 or more years (5). Subjects completed a questionnaire, and recent laboratory tests and eye reports were provided by primary care physicians and eye specialists, respectively. […] The 325 participants were 65.5 ± 8.5 years old with diagnosis at age 10 years (interquartile range [IQR] 6.0, 16) and duration of 54.9 ± 6.4 years.”

“In univariable analyses, the following were significantly associated with a greater burden of complications: presence of hypertension, statin, aspirin and ACE inhibitor or ARB use, higher Problem Areas in Diabetes (PAID) and Geriatric Depression Scale (GDS) scores, and higher levels of triglycerides and HbA1c. The following were significantly associated with a lower burden of complications: current physical activity, higher quality of life, and higher HDL cholesterol.”

“In the multivariable analysis, a higher PAID score was associated with a greater burden of complications (risk ratio [RR] 1.15 [95% CI 1.06–1.25] for each 10-point-higher score). Aspirin and statin use were also associated with a greater burden of complications (RR 1.24 [95% CI 1.01–1.52] and RR 1.34 [95% CI 1.05–1.70], respectively) (Table 1), whereas HbA1c was not.”

“Our findings indicate that in individuals with long-standing T1D, burden of complications is largely not associated with historical characteristics or simple objective measurements, as associations with statistical significance likely reflect reverse causality. Notably, HbA1c was not associated with burden of complications […]. This further confirms that other unmeasured variables such as genetic, metabolic, or physiologic characteristics may best identify mechanisms and biomarkers of complications in long-standing T1D.”

v. Cardiovascular Risk Factor Targets and Cardiovascular Disease Event Risk in Diabetes: A Pooling Project of the Atherosclerosis Risk in Communities Study, Multi-Ethnic Study of Atherosclerosis, and Jackson Heart Study.

“Controlling cardiovascular disease (CVD) risk factors in diabetes mellitus (DM) reduces the number of CVD events, but the effects of multifactorial risk factor control are not well quantified. We examined whether being at targets for blood pressure (BP), LDL cholesterol (LDL-C), and glycated hemoglobin (HbA1c) together are associated with lower risks for CVD events in U.S. adults with DM. […] We studied 2,018 adults, 28–86 years of age with DM but without known CVD, from the Atherosclerosis Risk in Communities (ARIC) study, Multi-Ethnic Study of Atherosclerosis (MESA), and Jackson Heart Study (JHS). Cox regression examined coronary heart disease (CHD) and CVD events over a mean 11-year follow-up in those individuals at BP, LDL-C, and HbA1c target levels, and by the number of controlled risk factors.”

“Of 2,018 DM subjects (43% male, 55% African American), 41.8%, 32.1%, and 41.9% were at target levels for BP, LDL-C, and HbA1c, respectively; 41.1%, 26.5%, and 7.2% were at target levels for any one, two, or all three factors, respectively. Being at BP, LDL-C, or HbA1c target levels related to 17%, 33%, and 37% lower CVD risks and 17%, 41%, and 36% lower CHD risks, respectively (P < 0.05 to P < 0.0001, except for BP in CHD risk); those subjects with one, two, or all three risk factors at target levels (vs. none) had incrementally lower adjusted risks of CVD events of 36%, 52%, and 62%, respectively, and incrementally lower adjusted risks of CHD events of 41%, 56%, and 60%, respectively (P < 0.001 to P < 0.0001). Propensity score adjustment showed similar findings.”

“In our pooled analysis of subjects with DM in three large-scale U.S. prospective studies, the more factors among HbA1c, BP, and LDL-C that were at goal levels, the lower are the observed CHD and CVD risks (∼60% lower when all three factors were at goal levels compared with none). However, fewer than one-tenth of our subjects were at goal levels for all three factors. These findings underscore the value of achieving target or lower levels of these modifiable risk factors, especially in combination, among persons with DM for the future prevention of CHD and CVD events.”

In some studies you see very low proportions of patients reaching target variables because the targets are stupid (to be perfectly frank about it). The HbA1c target applied in this study was a level <53.0 mmol/mol (7%), which is definitely not crazy if the majority of the individuals included were type 2, which they almost certainly were. You can argue about the BP goal, but it’s obvious here that the authors are perfectly aware of the contentiousness of this variable.
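For readers switching between the two HbA1c scales used above: NGSP percent and IFCC mmol/mol are related by the standard IFCC-NGSP master equation, and percent values are also often translated into an estimated average glucose via the ADAG regression. A quick sketch:

```python
def percent_to_mmol_mol(a1c_percent):
    """IFCC-NGSP master equation: HbA1c in NGSP % -> IFCC mmol/mol."""
    return 10.929 * (a1c_percent - 2.15)

def estimated_average_glucose(a1c_percent):
    """ADAG regression: HbA1c % -> estimated average glucose in mg/dL."""
    return 28.7 * a1c_percent - 46.7

# The 7% target used in this study
print(f"{percent_to_mmol_mol(7.0):.1f} mmol/mol")    # ~53.0, matching the paper
print(f"{estimated_average_glucose(7.0):.0f} mg/dL")  # ~154 mg/dL average glucose
```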

It’s incidentally noteworthy – and the authors do of course take note of it – that one of the primary results of this study (~60% lower risk when all three risk factors are at goal levels), a study which included a large proportion of African Americans in its sample, is almost identical to the results of the Danish Steno-2 clinical trial, which included only white Danish patients (and the results of which I have discussed here on the blog before). In the Steno study, the result was “a 57% reduction in CVD death and a 59% reduction in CVD events.”

vi. Illness Identity in Adolescents and Emerging Adults With Type 1 Diabetes: Introducing the Illness Identity Questionnaire.

“The current study examined the utility of a new self-report questionnaire, the Illness Identity Questionnaire (IIQ), which assesses the concept of illness identity, or the degree to which type 1 diabetes is integrated into one’s identity. Four illness identity dimensions (engulfment, rejection, acceptance, and enrichment) were validated in adolescents and emerging adults with type 1 diabetes. Associations with psychological and diabetes-specific functioning were assessed.”

“A sample of 575 adolescents and emerging adults (14–25 years of age) with type 1 diabetes completed questionnaires on illness identity, psychological functioning, diabetes-related problems, and treatment adherence. Physicians were contacted to collect HbA1c values from patients’ medical records. Confirmatory factor analysis (CFA) was conducted to validate the IIQ. Path analysis with structural equation modeling was used to examine associations between illness identity and psychological and diabetes-specific functioning.”

“The first two identity dimensions, engulfment and rejection, capture a lack of illness integration, or the degree to which having diabetes is not well integrated as part of one’s sense of self. Engulfment refers to the degree to which diabetes dominates a person’s identity. Individuals completely define themselves in terms of their diabetes, which invades all domains of life (9). Rejection refers to the degree to which diabetes is rejected as part of one’s identity and is viewed as a threat or as unacceptable to the self. […] Acceptance refers to the degree to which individuals accept diabetes as a part of their identity, besides other social roles and identity assets. […] Enrichment refers to the degree to which having diabetes results in positive life changes, benefits one’s identity, and enables one to grow as a person (12). […] These changes can manifest themselves in different ways, including an increased appreciation for life, a change of life priorities, and a more positive view of the self (14).”

“Previous quantitative research assessing similar constructs has suggested that the degree to which individuals integrate their illness into their identity may affect psychological and diabetes-specific functioning in patients. Diabetes intruding upon all domains of life (similar to engulfment) [has been] related to more depressive symptoms and more diabetes-related problems […] In contrast, acceptance has been related to fewer depressive symptoms and diabetes-related problems and to better glycemic control (6,15). Similarly, benefit finding has been related to fewer depressive symptoms and better treatment adherence (16). […] The current study introduces the IIQ in individuals with type 1 diabetes as a way to assess all four illness identity dimensions.”

“The Cronbach α was 0.90 for engulfment, 0.84 for rejection, 0.85 for acceptance, and 0.90 for enrichment. […] CFA indicated that the IIQ has a clear factor structure, meaningfully differentiating four illness identity dimensions. Rejection was related to worse treatment adherence and higher HbA1c values. Engulfment was related to less adaptive psychological functioning and more diabetes-related problems. Acceptance was related to more adaptive psychological functioning, fewer diabetes-related problems, and better treatment adherence. Enrichment was related to more adaptive psychological functioning. […] the concept of illness identity may help to clarify why certain adolescents and emerging adults with diabetes show difficulties in daily functioning, whereas others succeed in managing developmental and diabetes-specific challenges.”
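For readers unfamiliar with the Cronbach α values quoted above: α is an internal-consistency estimate computed from an item-by-respondent score matrix. A minimal sketch of the standard formula, using invented toy data (not the IIQ’s):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Toy data: five items all driven by one underlying trait, plus noise,
# so they intercorrelate and alpha comes out high.
rng = np.random.default_rng(1)
trait = rng.standard_normal(500)
scores = trait[:, None] + 0.8 * rng.standard_normal((500, 5))
print(round(cronbach_alpha(scores), 2))
```

With purely independent items the same function returns a value near zero, which is the sense in which α indexes how coherently a scale’s items hang together.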

June 30, 2017 Posted by | Cardiology, Diabetes, Medicine, Psychology, Studies | Leave a comment

The Personality Puzzle (I)

I don’t really like this book, which is an introductory personality psychology textbook by David Funder. I’ve read the first 400 pages (out of 700), but I’m still debating whether or not to finish it; it just isn’t very good. The level of coverage is low, it’s very fluffy, and the signal-to-noise ratio is nowhere near where I’d like it to be when I’m reading academic texts. Some parts of it frankly read like popular science. However, despite not feeling that the book is all that great, I can’t justify not blogging it; stuff I don’t blog I tend to forget, and if I’m reading a mediocre textbook anyway, I should at least try to pick out some of the decent stuff in there that keeps me reading, and try to make it easier for me to recall that stuff later. Some parts of the book, and some of the arguments/observations included in it, are in my opinion just plain silly or stupid, but I won’t go into these things in this post because I don’t really see what the point of doing that would be.

The main reason why I decided to give the book a go was that I liked Funder’s book Personality Judgment, which I read a few years ago and which deals with some topics also covered superficially in this text. It’s a much better book, in my opinion, at least as far as I can remember (…though I have actually started to wonder whether it really was all that great, given that it was written by the same guy who wrote this book…). If you’re interested in a more ‘pure’ personality psychology text, a significantly better alternative is Leary et al.’s Handbook of Individual Differences in Social Behavior. Because of the multi-author format it also includes some very poor chapters, but those tend to be somewhat easy to identify and skip to get to the good stuff if you’re so inclined, and the general level of coverage is much higher than that of this book.

Below I have added some quotes and observations from the first 150 pages of the book.

“A theory that accounts for certain things extremely well will probably not explain everything else so well. And a theory that tries to explain almost everything […] would probably not provide the best explanation for any one thing. […] different [personality psychology] basic approaches address different sets of questions […] each basic approach usually just ignores the topics it is not good at explaining.”

“Personality psychology tends to emphasize how individuals are different from one another. […] Other areas of psychology, by contrast, are more likely to treat people as if they were the same or nearly the same. Not only do the experimental subfields of psychology, such as cognitive and social psychology, tend to ignore how people are different from each other, but also the statistical analyses central to their research literally put individual differences into their “error” terms […] Although the emphasis of personality psychology often entails categorizing and labeling people, it also leads the field to be extraordinarily sensitive — more than any other area of psychology — to the fact that people really are different.”

“If you want to “look at” personality, what do you look at, exactly? Four different things. First, and perhaps most obviously, you can have the person describe herself. Personality psychologists often do exactly this. Second, you can ask people who know the person to describe her. Third, you can check on how the person is faring in life. And finally, you can observe what the person does and try to measure her behavior as directly and objectively as possible. These four types of clues can be called S [self-judgments], I [informants], L [life], and B [behavior] data […] The point of the four-way classification […] is not to place every kind of data neatly into one and only one category. Rather, the point is to illustrate the types of data that are relevant to personality and to show how they all have both advantages and disadvantages.”

“For cost-effectiveness, S data simply cannot be beat. […] According to one analysis, 70 percent of the articles in an important personality journal were based on self-report (Vazire, 2006).”

“I data are judgments by knowledgeable “informants” about general attributes of the individual’s personality. […] Usually, close acquaintanceship paired with common sense is enough to allow people to make judgments of each other’s attributes with impressive accuracy […]. Indeed, they may be more accurate than self-judgments, especially when the judgments concern traits that are extremely desirable or extremely undesirable […]. Only when the judgments are of a technical nature (e.g., the diagnosis of a mental disorder) does psychological education become relevant. Even then, acquaintances without professional training are typically well aware when someone has psychological problems […] psychologists often base their conclusions on contrived tests of one kind or another, or on observations in carefully constructed and controlled environments. Because I data derive from behaviors informants have seen in daily social interactions, they enjoy an extra chance of being relevant to aspects of personality that affect important life outcomes. […] I data reflect the opinions of people who interact with the person every day; they are the person’s reputation. […] personality judgments can [however] be [both] unfair as well as mistaken […] The most common problem that arises from letting people choose their own informants — the usual practice in research — may be the “letter of recommendation effect” […] research participants may tend to nominate informants who think well of them, leading to I data that provide a more positive picture than might have been obtained from more neutral parties.”

“L data […] are verifiable, concrete, real-life facts that may hold psychological significance. […] An advantage of using archival records is that they are not prone to the potential biases of self-report or the judgments of others. […] [However] L data have many causes, so trying to establish direct connections between specific attributes of personality and life outcomes is chancy. […] a psychologist can predict a particular outcome from psychological data only to the degree that the outcome is psychologically caused. L data often are psychologically caused only to a small degree.”

“The idea of B data is that participants are found, or put, in some sort of a situation, sometimes referred to as a testing situation, and then their behavior is directly observed. […] B data are expensive [and] are not used very often compared to the other types. Relatively few psychologists have the necessary resources.”

“Reliable data […] are measurements that reflect what you are trying to assess and are not affected by anything else. […] When trying to measure a stable attribute of personality—a trait rather than a state — the question of reliability reduces to this: Can you get the same result more than once? […] Validity is the degree to which a measurement actually reflects what one thinks or hopes it does. […] for a measure to be valid, it must be reliable. But a reliable measure is not necessarily valid. […] A measure that is reliable gives the same answer time after time. […] But even if a measure is the same time after time, that does not necessarily mean it is correct.”

“[M]ost personality tests provide S data. […] Other personality tests yield B data. […] IQ tests […] yield B data. Imagine trying to assess intelligence using an S-data test, asking questions such as “Are you an intelligent person?” and “Are you good at math?” Researchers have actually tried this, but simply asking people whether they are smart turns out to be a poor way to measure intelligence”.

“The answer an individual gives to any one question might not be particularly informative […] a single answer will tend to be unreliable. But if a group of similar questions is asked, the average of the answers ought to be much more stable, or reliable, because random fluctuations tend to cancel each other out. For this reason, one way to make a personality test more reliable is simply to make it longer.”
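The “make the test longer” point is formalized in classical test theory by the Spearman–Brown prophecy formula (not named in the passage above); a quick sketch:

```python
def spearman_brown(r: float, k: float) -> float:
    """Predicted reliability of a test lengthened by a factor of k,
    given the reliability r of the original test (classical test theory)."""
    return k * r / (1 + (k - 1) * r)

# A scale whose single-administration reliability is 0.3:
for k in (1, 2, 5, 10):
    print(k, round(spearman_brown(0.3, k), 3))
```

Averaging over more items damps the random item-level fluctuations, which is exactly why predicted reliability climbs toward 1 as k grows.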

“The factor analytic method of test construction is based on a statistical technique. Factor analysis identifies groups of things […] that seem to have something in common. […] To use factor analysis to construct a personality test, researchers begin with a long list of […] items […] The next step is to administer these items to a large number of participants. […] The analysis is based on calculating correlation coefficients between each item and every other item. Many items […] will not correlate highly with anything and can be dropped. But the items that do correlate with each other can be assembled into groups. […] The next steps are to consider what the items have in common, and then name the factor. […] Factor analysis has been used not only to construct tests, but also to decide how many fundamental traits exist […] Various analysts have come up with different answers.”
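The grouping step described above can be illustrated numerically: simulate items driven by two hypothetical latent traits, compute the item-by-item correlation matrix, and inspect its eigenvalues (a standard first step in deciding how many factors to retain). A sketch with invented data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
# Two hypothetical latent traits; six items, three loading on each.
f1, f2 = rng.standard_normal((2, n))
items = np.stack([f1 + 0.5 * rng.standard_normal(n) for _ in range(3)]
                 + [f2 + 0.5 * rng.standard_normal(n) for _ in range(3)])

r = np.corrcoef(items)                 # 6x6 item-by-item correlation matrix
eigvals = np.linalg.eigvalsh(r)[::-1]  # eigenvalues, largest first
# Two eigenvalues dominate -> two factors, matching the two latent traits.
print(np.round(eigvals, 2))
```

Items loading on the same trait correlate strongly with each other and weakly with the rest, so the correlation matrix falls into two blocks and two large eigenvalues emerge; naming what each block of items has in common is then the interpretive step the book describes.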

[The Big Five were derived from factor analyses.]

“The empirical strategy of test construction is an attempt to allow reality to speak for itself. […] Like the factor analytic approach described earlier, the first step of the empirical approach is to gather lots of items. […] The second step, however, is quite different. For this step, you need to have a sample of participants who have already independently been divided into the groups you are interested in. Occupational groups and diagnostic categories are often used for this purpose. […] Then you are ready for the third step: administering your test to your participants. The fourth step is to compare the answers given by the different groups of participants. […] The basic assumption of the empirical approach […] is that certain kinds of people answer certain questions on personality inventories in distinctive ways. If you answer questions the same way as members of some occupational or diagnostic group did in the original derivation study, then you might belong to that group too. […] responses to empirically derived tests are difficult to fake. With a personality test of the straightforward, S-data variety, you can describe yourself the way you want to be seen, and that is indeed the score you will get. But because the items on empirically derived scales sometimes seem backward or absurd, it is difficult to know how to answer in such a way as to guarantee the score you want. This is often held up as one of the great advantages of the empirical approach […] [However] empirically derived tests are only as good as the criteria by which they are developed or against which they are cross-validated. […] the empirical correlates of item responses by which these tests are assembled are those found in one place, at one time, with one group of participants. If no attention is paid to item content, then there is no way to be confident that the test will work in a similar manner at another time, in another place, with different participants.
[…] A particular concern is that the empirical correlates of item response might change over time. The MMPI was developed decades ago and has undergone a major revision only once”.

“It is not correct, for example, that the significance level provides the probability that the substantive (non-null) hypothesis is true. […] the significance level gives the probability of getting the result one found if the null hypothesis were true. One statistical writer offered the following analogy (Dienes, 2011): The probability that a person is dead, given that a shark has bitten his head off, is 1.0. However, the probability that a person’s head was bitten off by a shark, given that he is dead, is much lower. The probability of the data given the hypothesis, and of the hypothesis given the data, is not the same thing. And the latter is what we really want to know. […] An effect size is more meaningful than a significance level. […] It is both facile and misleading to use the frequently taught method of squaring correlations if the intention is to evaluate effect size.”
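The shark analogy is just Bayes’ rule applied with very different base rates; plugging in some invented numbers makes the asymmetry concrete:

```python
# Hypothetical base rates, chosen only to illustrate the asymmetry.
p_shark = 1e-7            # P(head bitten off by shark) -- assumed prior
p_dead = 0.01             # P(dead) -- assumed marginal probability
p_dead_given_shark = 1.0  # P(dead | shark bite) -- certain

# Bayes' rule: P(shark | dead) = P(dead | shark) * P(shark) / P(dead)
p_shark_given_dead = p_dead_given_shark * p_shark / p_dead
print(p_shark_given_dead)  # ~1e-05: tiny, even though P(dead | shark) = 1
```

A significance test reports the analogue of P(data | null hypothesis); what one actually wants is P(hypothesis | data), and as the toy numbers show, the two can differ by orders of magnitude.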

June 30, 2017 Posted by | Books, Psychology, Statistics | Leave a comment

A few papers

i. To Conform or to Maintain Self-Consistency? Hikikomori Risk in Japan and the Deviation From Seeking Harmony.

A couple of data points and observations from the paper:

“There is an increasing number of youth in Japan who are dropping out of society and isolating themselves in their bedrooms from years to decades at a time. According to Japan’s Ministry of Health, Labor and Welfare’s first official 2003 guidelines on this culture-bound syndrome, hikikomori (social isolation syndrome) has the following specific diagnostic criteria: (1) no motivation to participate in school or work; (2) no signs of schizophrenia or any other known psychopathologies; and (3) persistence of social withdrawal for at least six months.”

“One obvious dilemma in studying hikikomori is that most of those suffering from hikikomori, by definition, do not seek treatment. More importantly, social isolation itself is not even a symptom of any of the DSM diagnosis often assigned to an individual afflicted with hikikomori […] The motivation for isolating oneself among a hikikomori is simply to avoid possible social interactions with others who might know or judge them (Zielenziger, 2006).”

“Saito’s (2010) and Sakai and colleagues’ (2011) data suggest that 10% to 15% of the hikikomori population suffer from an autism spectrum disorder. […] in the first epidemiological study conducted on hikikomori that was as close to a nation-wide random sample as possible, Koyama and colleagues (2010) conducted a face-to-face household survey, including a structured diagnostic interview, by randomly picking households and interviewing 4,134 individuals. They confirmed a hikikomori lifetime prevalence rate of 1.2% in their nationwide sample. Among these hikikomori individuals, the researchers found that only half suffered from a DSM-IV diagnosis. However, and more importantly, there was no particular diagnosis that was systematically associated with hikikomori. […] the researchers concluded that any DSM diagnosis was an epiphenomenon to hikikomori at best and that hikikomori is rather a “psychopathology characterized by impaired motivation” (p. 72).”

ii. Does the ‘hikikomori’ syndrome of social withdrawal exist outside Japan?: A preliminary international investigation.

“Purpose

To explore whether the ‘hikikomori’ syndrome (social withdrawal) described in Japan exists in other countries, and if so, how patients with the syndrome are diagnosed and treated.

Methods

Two hikikomori case vignettes were sent to psychiatrists in Australia, Bangladesh, India, Iran, Japan, Korea, Taiwan, Thailand and the USA. Participants rated the syndrome’s prevalence in their country, etiology, diagnosis, suicide risk, and treatment.

Results

Out of 247 responses to the questionnaire (123 from Japan and 124 from other countries), 239 were enrolled in the analysis. Respondents felt the hikikomori syndrome is seen in all countries examined and especially in urban areas. Biopsychosocial, cultural, and environmental factors were all listed as probable causes of hikikomori, and differences among countries were not significant. Japanese psychiatrists suggested treatment in outpatient wards and some did not think that psychiatric treatment is necessary. Psychiatrists in other countries opted for more active treatment such as hospitalization.

Conclusions

Patients with the hikikomori syndrome are perceived as occurring across a variety of cultures by psychiatrists in multiple countries. Our results provide a rational basis for study of the existence and epidemiology of hikikomori in clinical or community populations in international settings.”

“Our results extend rather than clarify the debate over diagnosis of hikikomori. In our survey, a variety of diagnoses, such as psychosis, depression, anxiety and personality disorders, were proffered. Opinions as to whether hikikomori cases can be diagnosed using ICD-10/DSM-IV criteria differed depending on the participants’ countries and the cases’ age of onset. […] a recent epidemiological survey in Japan reported approximately a fifty-fifty split between hikikomori who had experienced a psychiatric disorder and had not [14]. These data and other studies that have not been able to diagnose all cases of hikikomori may suggest the existence of ‘primary hikikomori’ that is not an expression of any other psychiatric disorder [28,8,9,5,29]. In order to clarify differences between ‘primary hikikomori’ (social withdrawal not associated with any underlying psychiatric disorder) and ‘secondary hikikomori’ (social withdrawal caused by an established psychiatric disorder), further epidemiological and psychopathological studies are needed. […] Even if all hikikomori cases prove to be within some kind of psychiatric disorders, it is valuable to continue to focus on the hikikomori phenomenon because of its associated morbidity, similar to how suicidality is examined in various fields of psychiatry [30]. Reducing the burden of hikikomori symptoms, regardless of what psychiatric disorders patients may have, may provide a worthwhile improvement in their quality of life, and this suggests another direction of future hikikomori research.”

“Our case vignette survey indicates that the hikikomori syndrome, previously thought to exist only in Japan, is perceived by psychiatrists to exist in many other countries. It is particularly perceived as occurring in urban areas and might be associated with rapid global sociocultural changes. There is no consensus among psychiatrists within or across countries about the causes, diagnosis and therapeutic interventions for hikikomori yet.”

iii. Hikikomori: clinical and psychopathological issues (review). A poor paper, but it did have a little bit of data of interest:

“The prevalence of hikikomori is difficult to assess […]. In Japan, more than one million cases have been estimated by experts, but there is no population-based study to confirm these data (9). […] In 2008, Kiyota et al. summarized 3 population-based studies involving 12 cities and 3951 subjects, highlighting that between 0.9% and 3.8% of the sample had a hikikomori history in anamnesis (11). The typical hikikomori patient is male (4:1 male-to-female ratio) […] females constitute a minor fraction of the reported cases, and usually their period of social isolation is limited.”

iv. Interpreting results of ethanol analysis in postmortem specimens: A review of the literature.

A few observations from the paper:

“A person’s blood-alcohol concentration (BAC) and state of inebriation at the time of death is not always easy to establish owing to various postmortem artifacts. The possibility of alcohol being produced in the body after death, e.g. via microbial contamination and fermentation is a recurring issue in routine casework. If ethanol remains unabsorbed in the stomach at the time of death, this raises the possibility of continued local diffusion into surrounding tissues and central blood after death. Skull trauma often renders a person unconscious for several hours before death, during which time the BAC continues to decrease owing to metabolism in the liver. Under these circumstances blood from an intracerebral or subdural clot is a useful specimen for determination of ethanol. Bodies recovered from water are particularly problematic to deal with owing to possible dilution of body fluids, decomposition, and enhanced risk of microbial synthesis of ethanol. […] Alcoholics often die at home with zero or low BAC and nothing more remarkable at autopsy than a fatty liver. Increasing evidence suggests that such deaths might be caused by a pronounced ketoacidosis.”

“The concentrations of ethanol measured in blood drawn from different sampling sites tend to vary much more than expected from inherent variations in the analytical methods used [49]. Studies have shown that concentrations of ethanol and other drugs determined in heart blood are generally higher than in blood from a peripheral vein although in any individual case there are likely to be considerable variations [50–53].”

“The BAC necessary to cause death is often an open question and much depends on the person’s age, drinking experience and degree of tolerance development [78]. The speed of drinking plays a role in alcohol toxicity as does the kind of beverage consumed […] Drunkenness and hypothermia represent a dangerous combination and deaths tend to occur at a lower BAC when people are exposed to cold, such as, when an alcoholic sleeps outdoors in the winter months [78]. Drinking large amounts of alcohol to produce stupor and unconsciousness combined with positional asphyxia or inhalation of vomit are common causes of death in intoxicated individuals who die of suffocation [81–83]. The toxicity of ethanol is often considerably enhanced by the concomitant use of other drugs with their site of action in the brain, especially opiates, propoxyphene, antidepressants and some sedative hypnotics [84]. […] It seems reasonable to assume that the BAC at autopsy will almost always be lower than the maximum BAC reached during a drinking binge, owing to metabolism of ethanol taking place up until the moment of death [85–87]. During the time after discontinuation of drinking until death, the BAC might decrease appreciably depending on the speed of alcohol elimination from blood, which in heavy drinkers could exceed 20 or 30 mg/100 mL per h (0.02 or 0.03 g% per h) [88].”

“When the supply of oxygen to the body ends, the integrity of cell membranes and tissue compartments gradually disintegrate through the action of various digestive enzymes. This reflects the process of autolysis (self digestion) resulting in a softening and liquefaction of the tissue (freezing the body prevents autolysis). During this process, bacteria from the bowel invade the surrounding tissue and vascular system and the rate of infiltration depends on many factors including the ambient temperature, position of the body and whether death was caused by bacterial infection. Glucose concentrations increase in blood after death and this sugar is probably the simplest substrate for microbial synthesis of ethanol [20,68]. […] Extensive trauma to a body […] increases the potential for spread of bacteria and heightens the risk of ethanol production after death [217]. Blood-ethanol concentrations as high as 190 mg/100 mL have been reported in postmortem blood after particularly traumatic events such as explosions and when no evidence existed to support ingestion of ethanol before the disaster [218].”

v. Interventions based on the Theory of Mind cognitive model for autism spectrum disorder (ASD) (Cochrane review).

“The ‘Theory of Mind’ (ToM) model suggests that people with autism spectrum disorder (ASD) have a profound difficulty understanding the minds of other people – their emotions, feelings, beliefs, and thoughts. As an explanation for some of the characteristic social and communication behaviours of people with ASD, this model has had a significant influence on research and practice. It implies that successful interventions to teach ToM could, in turn, have far-reaching effects on behaviours and outcome.”

“Twenty-two randomised trials were included in the review (N = 695). Studies were highly variable in their country of origin, sample size, participant age, intervention delivery type, and outcome measures. Risk of bias was variable across categories. There were very few studies for which there was adequate blinding of participants and personnel, and some were also judged at high risk of bias in blinding of outcome assessors. There was also evidence of some bias in sequence generation and allocation concealment.”

“Studies were grouped into four main categories according to intervention target/primary outcome measure. These were: emotion recognition studies, joint attention and social communication studies, imitation studies, and studies teaching ToM itself. […] There was very low quality evidence of a positive effect on measures of communication based on individual results from three studies. There was low quality evidence from 11 studies reporting mixed results of interventions on measures of social interaction, very low quality evidence from four studies reporting mixed results on measures of general communication, and very low quality evidence from four studies reporting mixed results on measures of ToM ability. […] While there is some evidence that ToM, or a precursor skill, can be taught to people with ASD, there is little evidence of maintenance of that skill, generalisation to other settings, or developmental effects on related skills. Furthermore, inconsistency in findings and measurement means that evidence has been graded of ‘very low’ or ‘low’ quality and we cannot be confident that suggestions of positive effects will be sustained as high-quality evidence accumulates. Further longitudinal designs and larger samples are needed to help elucidate both the efficacy of ToM-linked interventions and the explanatory value of the ToM model itself.”

vi. Risk of Psychiatric and Neurodevelopmental Disorders Among Siblings of Probands With Autism Spectrum Disorders.

“The Finnish Prenatal Study of Autism and Autism Spectrum Disorders used a population-based cohort that included children born from January 1, 1987, to December 31, 2005, who received a diagnosis of ASD by December 31, 2007. Each case was individually matched to 4 control participants by sex and date and place of birth. […] Among the 3578 cases with ASD (2841 boys [79.4%]) and 11 775 controls (9345 boys [79.4%]), 1319 cases (36.9%) and 2052 controls (17.4%) had at least 1 sibling diagnosed with any psychiatric or neurodevelopmental disorder (adjusted RR, 2.5; 95% CI, 2.3-2.6).”

“Conclusions and Relevance Psychiatric and neurodevelopmental disorders cluster among siblings of probands with ASD. For etiologic research, these findings provide further evidence that several psychiatric and neurodevelopmental disorders have common risk factors.”

vii. Treatment for epilepsy in pregnancy: neurodevelopmental outcomes in the child (Cochrane review).

“Accumulating evidence suggests an association between prenatal exposure to antiepileptic drugs (AEDs) and increased risk of both physical anomalies and neurodevelopmental impairment. Neurodevelopmental impairment is characterised by either a specific deficit or a constellation of deficits across cognitive, motor and social skills and can be transient or continuous into adulthood. It is of paramount importance that these potential risks are identified, minimised and communicated clearly to women with epilepsy.”

“Twenty-two prospective cohort studies were included and six registry based studies. Study quality varied. […] the IQ of children exposed to VPA [sodium valproate] (n = 112) was significantly lower than for those exposed to CBZ [carbamazepine] (n = 191) (MD [mean difference] 8.69, 95% CI 5.51 to 11.87, P < 0.00001). […] IQ was significantly lower for children exposed to VPA (n = 74) versus LTG [lamotrigine] (n = 84) (MD -10.80, 95% CI -14.42 to -7.17, P < 0.00001). DQ [developmental quotient] was higher in children exposed to PHT (n = 80) versus VPA (n = 108) (MD 7.04, 95% CI 0.44 to 13.65, P = 0.04). Similarly IQ was higher in children exposed to PHT (n = 45) versus VPA (n = 61) (MD 9.25, 95% CI 4.78 to 13.72, P < 0.0001). A dose effect for VPA was reported in six studies, with higher doses (800 to 1000 mg daily or above) associated with a poorer cognitive outcome in the child. We identified no convincing evidence of a dose effect for CBZ, PHT or LTG. Studies not included in the meta-analysis were reported narratively, the majority of which supported the findings of the meta-analyses.”

“The most important finding is the reduction in IQ in the VPA exposed group, which are sufficient to affect education and occupational outcomes in later life. However, for some women VPA is the most effective drug at controlling seizures. Informed treatment decisions require detailed counselling about these risks at treatment initiation and at pre-conceptual counselling. We have insufficient data about newer AEDs, some of which are commonly prescribed, and further research is required. Most women with epilepsy should continue their medication during pregnancy as uncontrolled seizures also carries a maternal risk.”

Do take note of the effect sizes reported here. To take an example, the difference between being treated with valproate and lamotrigine might equal 10 IQ points in the child – these are huge effects.

June 11, 2017 Posted by | Medicine, Neurology, Pharmacology, Psychiatry, Psychology, Studies | Leave a comment

A few papers

i. Quality of life of adolescents with autism spectrum disorders: comparison to adolescents with diabetes.

“The goals of our study were to clarify the consequences of autistic disorder without mental retardation on […] adolescents’ daily lives, and to consider them in comparison with the impact of a chronic somatic disease (diabetes) […] Scores for adolescents with ASD were significantly lower than those of the control and the diabetic adolescents, especially for friendships, leisure time, and affective and sexual relationships. On the other hand, better scores were obtained for the relationships with parents and teachers and for self-image. […] For subjects with autistic spectrum disorders and without mental retardation, impairment of quality of life is significant in adolescence and young adulthood. Such adolescents are dissatisfied with their relationships, although they often have real motivation to succeed with them.”

As someone who has both conditions, that paper was quite interesting. A follow-up question of some personal interest to me would of course be this: How do the scores/outcomes of these two groups compare to the scores of people who have both conditions simultaneously? This question is likely almost impossible to answer with any confidence, given the power issues involved, unless the conditions are strongly dependent (which seems unlikely). Global prevalence of autism is around 0.6% (link), and although type 1 prevalence is highly variable across countries, the variation just means that in some countries almost nobody gets it whereas in others it is merely rare; prevalence varies from 0.5 to 60 per 100,000 children aged 0-15 years. Assuming independence, if you combine a condition affecting one in a hundred people with one affecting one in a thousand, you will on average need on the order of 100,000 people to pick up a single individual with both conditions of interest. It's bothersome to even try to find people like that, and good luck doing any sort of sensible statistics on that kind of sample. Of course type 1 diabetes prevalence increases with age in a way that autism prevalence does not, because people continue to be diagnosed with type 1 into late adulthood whereas most autistics are diagnosed as children, so the rarity of the combination is less of a problem in adult samples. But if you're looking at outcomes, it's arguable whether it makes sense not to differentiate between someone diagnosed with type 1 diabetes as a 35-year-old and someone diagnosed as a 5-year-old (are these really comparable diseases, and which outcomes are you interested in?).
At least that is the case for developed societies, where people with type 1 diabetes have high life expectancies. In less developed societies there may be a stronger link between incidence and prevalence because of high mortality in the patient group: people who develop type 1 diabetes in such countries may not live very long due to inadequate medical care, so the gap between how many people get the disease in a given period and how many people in total have it is smaller than in places with lower mortality rates. You always need to be careful to distinguish between incidence and prevalence when dealing with conditions like T1DM, which carry a potentially high mortality rate in settings where people have limited access to medical care, because differential cross-country mortality patterns may be important.
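The back-of-the-envelope power point above can be made concrete in a few lines; the prevalence figures below are simply the ones quoted in the text, and the calculation assumes independence of the two conditions:

```python
# Back-of-the-envelope sample-size arithmetic: under independence the joint
# prevalence of two conditions is the product of their prevalences, so the
# expected number of people you must sample to find one person with both
# conditions is the reciprocal of that product.

def expected_sample_size(prev_a: float, prev_b: float) -> float:
    """People needed, on average, to find one individual with both conditions."""
    return 1.0 / (prev_a * prev_b)

# A one-in-a-hundred condition combined with a one-in-a-thousand condition:
print(round(expected_sample_size(0.01, 0.001)))    # 100000

# Autism (~0.6%) combined with a T1D prevalence of 60 per 100,000 children:
print(round(expected_sample_size(0.006, 60 / 100_000)))
```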

ii. Exercise for depression (Cochrane review).

“Background

Depression is a common and important cause of morbidity and mortality worldwide. Depression is commonly treated with antidepressants and/or psychological therapy, but some people may prefer alternative approaches such as exercise. There are a number of theoretical reasons why exercise may improve depression. This is an update of an earlier review first published in 2009.

Objectives

To determine the effectiveness of exercise in the treatment of depression in adults compared with no treatment or a comparator intervention. […]

Selection criteria 

Randomised controlled trials in which exercise (defined according to American College of Sports Medicine criteria) was compared to standard treatment, no treatment or a placebo treatment, pharmacological treatment, psychological treatment or other active treatment in adults (aged 18 and over) with depression, as defined by trial authors. We included cluster trials and those that randomised individuals. We excluded trials of postnatal depression.

Thirty-nine trials (2326 participants) fulfilled our inclusion criteria, of which 37 provided data for meta-analyses. There were multiple sources of bias in many of the trials; randomisation was adequately concealed in 14 studies, 15 used intention-to-treat analyses and 12 used blinded outcome assessors. For the 35 trials (1356 participants) comparing exercise with no treatment or a control intervention, the pooled SMD for the primary outcome of depression at the end of treatment was -0.62 (95% confidence interval (CI) -0.81 to -0.42), indicating a moderate clinical effect. There was moderate heterogeneity (I² = 63%).

When we included only the six trials (464 participants) with adequate allocation concealment, intention-to-treat analysis and blinded outcome assessment, the pooled SMD for this outcome was not statistically significant (-0.18, 95% CI -0.47 to 0.11). Pooled data from the eight trials (377 participants) providing long-term follow-up data on mood found a small effect in favour of exercise (SMD -0.33, 95% CI -0.63 to -0.03). […]
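As an aside, pooled SMDs of the kind quoted above come from inverse-variance weighting of the trial-level effects. The following is a minimal fixed-effect sketch with made-up trial numbers, not the review's actual data or its exact meta-analytic machinery:

```python
# Minimal inverse-variance (fixed-effect) pooling of standardized mean
# differences. The effects and variances below are invented for illustration;
# the review itself used Cochrane's standard meta-analysis methods.
import math

def pool_smd(effects, variances):
    """Return (pooled SMD, 95% CI) by inverse-variance weighting."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Three hypothetical trials favouring exercise (negative SMD = less depression):
smd, ci = pool_smd([-0.5, -0.7, -0.4], [0.04, 0.09, 0.06])
print(round(smd, 2), tuple(round(x, 2) for x in ci))
```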

Authors’ conclusions

Exercise is moderately more effective than a control intervention for reducing symptoms of depression, but analysis of methodologically robust trials only shows a smaller effect in favour of exercise. When compared to psychological or pharmacological therapies, exercise appears to be no more effective, though this conclusion is based on a few small trials.”

iii. Risk factors for suicide in individuals with depression: A systematic review.

“The search strategy identified 3374 papers for potential inclusion. Of these, 155 were retrieved for a detailed evaluation. Thirty-two articles fulfilled the detailed eligibility criteria. […] Nineteen studies (28 publications) were included. Factors significantly associated with suicide were: male gender (OR = 1.76, 95% CI = 1.08–2.86), family history of psychiatric disorder (OR = 1.41, 95% CI = 1.00–1.97), previous attempted suicide (OR = 4.84, 95% CI = 3.26–7.20), more severe depression (OR = 2.20, 95% CI = 1.05–4.60), hopelessness (OR = 2.20, 95% CI = 1.49–3.23) and comorbid disorders, including anxiety (OR = 1.59, 95% CI = 1.03–2.45) and misuse of alcohol and drugs (OR = 2.17, 95% CI = 1.77–2.66).
Limitations: There were fewer studies than suspected. Interdependence between risk factors could not be examined.”

iv. Cognitive behaviour therapy for social anxiety in autism spectrum disorder: a systematic review.

“Individuals who have autism spectrum disorders (ASD) commonly experience anxiety about social interaction and social situations. Cognitive behaviour therapy (CBT) is a recommended treatment for social anxiety (SA) in the non-ASD population. Therapy typically comprises cognitive interventions, imagery-based work and for some individuals, behavioural interventions. Whether these are useful for the ASD population is unclear. Therefore, we undertook a systematic review to summarise research about CBT for SA in ASD.”

I mostly include this review here to highlight how reviews aren’t everything – I like them, but you can’t do reviews when a field hasn’t been studied. This is definitely the case here. The review was sort of funny, but also depressing. So much work for so little insight. Here’s the gist of it:

“Using a priori criteria, we searched for English-language peer-reviewed empirical studies in five databases. The search yielded 1364 results. Titles, abstracts and relevant publications were independently screened by two reviewers. Findings: Four single case studies met the review inclusion criteria; data were synthesised narratively. Participants (three adults and one child) were diagnosed with ASD and social anxiety disorder.”

You search the scientific literature systematically, you find more than a thousand results, and you carefully evaluate which ones of them should be included in this kind of study …and what you end up with is 4 individual case studies…

(I won’t go into the results of the study as they’re pretty much worthless.)

v. Immigrant Labor Market Integration across Admission Classes.

“We examine patterns of labor market integration across immigrant groups. The study draws on Norwegian longitudinal administrative data covering labor earnings and social insurance claims over a 25‐year period and presents a comprehensive picture of immigrant‐native employment and social insurance differentials by admission class and by years since entry.”

Some quotes from the paper:

“A recent study using 2011 administrative data from Sweden finds an average employment gap to natives of 30 percentage points for humanitarian migrants (refugees) and 26 percentage point for family immigrants (Luik et al., 2016).”

“A considerable fraction of the immigrants leaves the country after just a few years. […] this is particularly the case for immigrants from the old EU and for students and work-related immigrants from developing countries. For these groups, fewer than 50 percent remain in the country 5 years after entry. For refugees and family migrants, the picture is very different, and around 80 percent appear to have settled permanently in the country. Immigrants from the new EU have a settlement pattern somewhere in between, with approximately 70 percent settled on a permanent basis. An implication of such differential outmigration patterns is that the long-term labor market performance of refugees and family immigrants is of particular economic and fiscal importance. […] the varying rates of immigrant inflows and outflows by admission class, along with other demographic trends, have changed the composition of the adult (25‐66) population between 1990 and 2015. In this population segment, the overall immigrant share increased from 4.9 percent in 1990 to 18.7 percent in 2015 — an increase by a factor of 3.8 over 25 years. […] Following the 2004 EU enlargement, the fraction of immigrants in Norway has increased by a steady rate of approximately one percentage point per year.”

“The trends in population and employment shares varies considerably across admission classes, with employment shares of refugees and family immigrants lagging their growth in population shares. […] In 2014, refugees and family immigrants accounted for 12.8 percent of social insurance claims, compared to 5.7 percent of employment (and 7.7 percent of the adult population). In contrast, the two EU groups made up 9.3 percent of employment (and 8.8 percent of the adult population) but only 3.6 percent of social insurance claimants. Although these patterns do illuminate the immediate (short‐term) fiscal impacts of immigration at each particular point in time, they are heavily influenced by each year’s immigrant composition – in terms of age, years since migration, and admission classes – and therefore provide little information about long‐term consequences and impacts of fiscal sustainability. To assess the latter, we need to focus on longer‐term integration in the Norwegian labor market.”

Which they then proceed to do in the paper. From the results of those analyses:

“For immigrant men, the sample average share in employment (i.e., whose main source of income is work) ranges from 58 percent for refugees to 89 percent for EU immigrants, with family migrants somewhere between (around 80 percent). The average shares with social insurance as the main source of income ranges from only four percent for EU immigrants to as much as 38 percent for refugees. The corresponding shares for native men are 87 percent in employment and 12 percent with social insurance as their main income source. For women, the average shares in employment vary from 46 percent for refugees to 85 percent for new EU immigrants, whereas the average shares in social insurance vary from five percent for new EU immigrants to 42 percent for refugees. The corresponding rates for native women are 80 percent in employment and 17 percent with social insurance as their main source of income.”

“The profiles estimated for refugees are particularly striking. For men, we find that the native‐immigrant employment gap reaches its minimum value at 20 percentage points after five to six years of residence. The gap then starts to increase quite sharply again, and reaches 30 percentage points after 15 years. This development is mirrored by a corresponding increase in social insurance dependency. For female refugees, the employment differential reaches its minimum of 30 percentage points after 5‐9 years of residence. The subsequent decline is less dramatic than what we observe for men, but the differential stands at 35 percentage points 15 years after admission. […] The employment difference between refugees from Bosnia and Somalia is fully 22.2 percentage points for men and 37.7 points for women. […] For immigrants from the old EU, the employment differential is slightly in favor of immigrants regardless of years since migration, and the social insurance differentials remain consistently negative. In other words, employment of old EU immigrants is almost indistinguishable from that of natives, and they are less likely to claim social insurance benefits.”

vi. Glucose Peaks and the Risk of Dementia and 20-Year Cognitive Decline.

“Hemoglobin A1c (HbA1c), a measure of average blood glucose level, is associated with the risk of dementia and cognitive impairment. However, the role of glycemic variability or glucose excursions in this association is unclear. We examined the association of glucose peaks in midlife, as determined by the measurement of 1,5-anhydroglucitol (1,5-AG) level, with the risk of dementia and 20-year cognitive decline.”

“Nearly 13,000 participants from the Atherosclerosis Risk in Communities (ARIC) study were examined. […] Over a median time of 21 years, dementia developed in 1,105 participants. Among persons with diabetes, each 5 μg/mL decrease in 1,5-AG increased the estimated risk of dementia by 16% (hazard ratio 1.16, P = 0.032). For cognitive decline among participants with diabetes and HbA1c <7% (53 mmol/mol), those with glucose peaks had a 0.19 greater z score decline over 20 years (P = 0.162) compared with those without peaks. Among participants with diabetes and HbA1c ≥7% (53 mmol/mol), those with glucose peaks had a 0.38 greater z score decline compared with persons without glucose peaks (P < 0.001). We found no significant associations in persons without diabetes.

CONCLUSIONS Among participants with diabetes, glucose peaks are a risk factor for cognitive decline and dementia. Targeting glucose peaks, in addition to average glycemia, may be an important avenue for prevention.”
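A note on reading the hazard ratio: under the proportional-hazards assumption, the reported HR of 1.16 per 5 μg/mL decrease in 1,5-AG compounds multiplicatively over larger decreases. A small illustrative sketch (my arithmetic, not the paper's):

```python
# Under proportional hazards, a per-unit hazard ratio compounds
# multiplicatively: if each 5 ug/mL decrease in 1,5-AG carries HR 1.16,
# a 15 ug/mL decrease carries 1.16**3. Illustrative arithmetic only.

def hr_for_decrease(hr_per_unit: float, unit: float, decrease: float) -> float:
    """Hazard ratio implied by `decrease`, given an HR per `unit` of exposure."""
    return hr_per_unit ** (decrease / unit)

print(round(hr_for_decrease(1.16, 5, 5), 2))    # 1.16
print(round(hr_for_decrease(1.16, 5, 15), 2))   # about 1.56
```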

vii. Gaze direction detection in autism spectrum disorder.

“Detecting where our partners direct their gaze is an important aspect of social interaction. An atypical gaze processing has been reported in autism. However, it remains controversial whether children and adults with autism spectrum disorder interpret indirect gaze direction with typical accuracy. This study investigated whether the detection of gaze direction toward an object is less accurate in autism spectrum disorder. Individuals with autism spectrum disorder (n = 33) and intelligence quotients–matched and age-matched controls (n = 38) were asked to watch a series of synthetic faces looking at objects, and decide which of two objects was looked at. The angle formed by the two possible targets and the face varied following an adaptive procedure, in order to determine individual thresholds. We found that gaze direction detection was less accurate in autism spectrum disorder than in control participants. Our results suggest that the precision of gaze following may be one of the altered processes underlying social interaction difficulties in autism spectrum disorder.”

“Where people look at informs us about what they know, want, or attend to. Atypical or altered detection of gaze direction might thus lead to impoverished acquisition of social information and social interaction. Alternatively, it has been suggested that abnormal monitoring of inner states […], or the lack of social motivation […], would explain the reduced tendency to follow conspecific gaze in individuals with ASD. Either way, a lower tendency to look at the eyes and to follow the gaze would provide fewer opportunities to practice GDD [gaze direction detection – US] ability. Thus, impaired GDD might either play a causal role in atypical social interaction, or conversely be a consequence of it. Exploring GDD earlier in development might help disentangle this issue.”
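The "adaptive procedure" mentioned above is presumably a staircase of some kind; the excerpt doesn't specify the rules, so the 2-down/1-up variant below, with a toy simulated observer, is purely an assumed illustration of how such threshold estimation works:

```python
import random

# Sketch of an adaptive (2-down/1-up) staircase for threshold estimation.
# The study's exact rules aren't given in the excerpt, so the observer model
# and parameters here are invented. Two correct answers make the task harder
# (smaller angle), one error makes it easier; the threshold is estimated as
# the mean angle at the reversal points.
random.seed(0)

def simulated_observer(angle, threshold=6.0):
    """Answers correctly more often the wider the angle (toy psychometric rule)."""
    p_correct = 0.5 + 0.5 * min(angle / (2 * threshold), 1.0)
    return random.random() < p_correct

angle, step = 20.0, 2.0
reversals, correct_streak, last_direction = [], 0, None
while len(reversals) < 8:
    if simulated_observer(angle):
        correct_streak += 1
        if correct_streak == 2:              # 2 correct -> smaller angle
            correct_streak = 0
            if last_direction == "up":
                reversals.append(angle)
            angle = max(angle - step, 0.5)
            last_direction = "down"
    else:
        correct_streak = 0                   # 1 error -> larger angle
        if last_direction == "down":
            reversals.append(angle)
        angle += step
        last_direction = "up"

threshold_estimate = sum(reversals) / len(reversals)
print(round(threshold_estimate, 1))          # converges near the observer's threshold
```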

June 1, 2017 Posted by | Diabetes, Economics, Epidemiology, Medicine, Neurology, Psychiatry, Psychology, Studies | Leave a comment

A few diabetes papers of interest

i. Cost-Effectiveness of Prevention and Treatment of the Diabetic Foot.

“A risk-based Markov model was developed to simulate the onset and progression of diabetic foot disease in patients with newly diagnosed type 2 diabetes managed with care according to guidelines for their lifetime. Mean survival time, quality of life, foot complications, and costs were the outcome measures assessed. Current care was the reference comparison. Data from Dutch studies on the epidemiology of diabetic foot disease, health care use, and costs, complemented with information from international studies, were used to feed the model.

RESULTS—Compared with current care, guideline-based care resulted in improved life expectancy, gain of quality-adjusted life-years (QALYs), and reduced incidence of foot complications. The lifetime costs of management of the diabetic foot following guideline-based care resulted in a cost per QALY gained of <$25,000, even for levels of preventive foot care as low as 10%. The cost-effectiveness varied sharply, depending on the level of foot ulcer reduction attained.

CONCLUSIONS—Management of the diabetic foot according to guideline-based care improves survival, reduces diabetic foot complications, and is cost-effective and even cost saving compared with standard care.”
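For readers unfamiliar with the method, a risk-based Markov cohort model just pushes a state distribution through a transition matrix year by year. The sketch below uses invented states and transition probabilities (only the 2.1% annual ulcer incidence is taken from the paper), so it illustrates the technique rather than the authors' actual model:

```python
# Minimal cohort Markov sketch of diabetic-foot disease progression.
# States and yearly transition probabilities are invented for illustration;
# the paper's model used Dutch epidemiological data and a richer state space.

STATES = ["no_ulcer", "ulcer", "amputation", "dead"]
# TRANSITION[i][j]: probability of moving from state i to state j in one year
TRANSITION = [
    [0.955, 0.021, 0.000, 0.024],   # no ulcer (2.1% annual ulcer incidence)
    [0.60,  0.30,  0.06,  0.04],    # ulcer
    [0.00,  0.05,  0.87,  0.08],    # amputation
    [0.0,   0.0,   0.0,   1.0],     # dead (absorbing)
]

def step(dist):
    """Advance the cohort's state distribution by one model cycle (year)."""
    return [sum(dist[i] * TRANSITION[i][j] for i in range(len(dist)))
            for j in range(len(dist))]

dist = [1.0, 0.0, 0.0, 0.0]         # cohort starts ulcer-free
for year in range(20):
    dist = step(dist)
print([round(p, 3) for p in dist])  # state distribution after 20 years
```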

I won’t go too deeply into the model setup and the results but some of the data they used to feed the model were actually somewhat interesting in their own right, and I have added some of these data below, along with some of the model results.

“It is estimated that 80% of LEAs [lower extremity amputations] are preceded by foot ulcers. Accordingly, it has been demonstrated that preventing the development of foot ulcers in patients with diabetes reduces the frequency of LEAs by 49–85% (6).”

“An annual ulcer incidence rate of 2.1% and an amputation incidence rate of 0.6% were among the reference country-specific parameters derived from this study and adopted in the model.”

“The health outcomes results of the cohort following standard care were comparable to figures reported for diabetic patients in the Netherlands. […] In the 10,000 patients followed until death, a total of 1,780 ulcer episodes occurred, corresponding to a cumulative ulcer incidence of 17.8% and an annual ulcer incidence of 2.2% (mean annual ulcer incidence for the Netherlands is 2.1%) (17). The number of amputations observed was 362 (250 major and 112 minor), corresponding to a cumulative incidence of 3.6% and an annual incidence of 0.4% (mean annual amputation incidence reported for the Netherlands is 0.6%) (17).”

“Cornerstones of guidelines-based care are intensive glycemic control (IGC) and optimal foot care (OFC). Although health benefits and economic efficiency of intensive blood glucose control (8) and foot care programs (9–14) have been individually reported, the health and economic outcomes and the cost-effectiveness of both interventions have not been determined. […] OFC according to guidelines includes professional protective foot care, education of patients and staff, regular inspection of the feet, identification of the high-risk patient, treatment of nonulcerative lesions, and a multidisciplinary approach to established foot ulcers. […] All cohorts of patients simulated for the different scenarios of guidelines care resulted in improved life expectancy, QALYs gained, and reduced incidence of foot ulcers and LEA compared with standard care. The largest effects on these outcomes were obtained when patients received IGC + OFC. When comparing the independent health effects of the two guidelines strategies, OFC resulted in a greater reduction in ulcer and amputation rates than IGC. Moreover, patients who received IGC + OFC showed approximately the same LEA incidence as patients who received OFC alone. The LEA decrease obtained was proportional to the level of foot ulcer reduction attained.”

“The mean total lifetime costs of a patient under either of the three guidelines care scenarios ranged from $4,088 to $4,386. For patients receiving IGC + OFC, these costs resulted in <$25,000 per QALY gained (relative to standard care). For patients receiving IGC alone, the ICER [here’s a relevant link – US] obtained was $32,057 per QALY gained, and for those receiving OFC alone, this ICER ranged from $12,169 to $220,100 per QALY gained, depending on the level of ulcer reduction attained. […] Increasing the effectiveness of preventive foot care in patients under OFC and IGC + OFC resulted in more QALYs gained, lower costs, and a more favorable ICER. The results of the simulations for the combined scenario (IGC + OFC) were rather insensitive to changes in utility weights and costing parameters. Similar results were obtained for parameter variations in the other two scenarios (IGC and OFC separately).”
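For reference, an ICER is just incremental cost divided by incremental effect. A minimal sketch with invented cost/QALY numbers, chosen only so the result lands near the paper's <$25,000-per-QALY threshold:

```python
# ICER = (incremental cost) / (incremental QALYs). The numbers below are
# invented for illustration; the paper reports ICERs from <$25,000 up to
# $220,100 per QALY depending on scenario and ulcer-reduction level.

def icer(cost_new: float, cost_old: float,
         qaly_new: float, qaly_old: float) -> float:
    """Incremental cost-effectiveness ratio, in dollars per QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Guideline care costing $486 more but yielding 0.02 extra QALYs per patient:
print(round(icer(4386, 3900, 10.52, 10.50)))    # 24300 dollars per QALY
```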

“The results of this study suggest that IGC + OFC reduces foot ulcers and amputations and leads to an improvement in life expectancy. Greater health benefits are obtained with higher levels of foot ulcer prevention. Although care according to guidelines increases health costs, the cost per QALY gained is <$25,000, even for levels of preventive foot care as low as 10%. ICERs of this order are cost-effective according to the stratification of interventions for diabetes recently proposed (32). […] IGC falls into the category of a possibly cost-effective intervention in the management of the diabetic foot. Although it does not produce significant reduction in foot ulcers and LEA, its effectiveness resides in the slowing of neuropathy progression rates.

Extrapolating our results to a practical situation, if IGC + OFC was to be given to all diabetic patients in the Netherlands, with the aim of reducing LEA by 50% (St. Vincent’s declaration), the cost per QALY gained would be $12,165 and the cost for managing diabetic ulcers and amputations would decrease by 53 and 58%, respectively. From a policy perspective, this is clearly cost-effective and cost saving compared with current care.”

ii. Early Glycemic Control, Age at Onset, and Development of Microvascular Complications in Childhood-Onset Type 1 Diabetes.

“The aim of this work was to study the impact of glycemic control (HbA1c) early in disease and age at onset on the occurrence of incipient diabetic nephropathy (MA) and background retinopathy (RP) in childhood-onset type 1 diabetes.

RESEARCH DESIGN AND METHODS—All children, diagnosed at 0–14 years in a geographically defined area in northern Sweden between 1981 and 1992, were identified using the Swedish Childhood Diabetes Registry. From 1981, a nationwide childhood diabetes care program was implemented recommending intensified insulin treatment. HbA1c and urinary albumin excretion were analyzed, and fundus photography was performed regularly. Retrospective data on all 94 patients were retrieved from medical records and laboratory reports.

RESULTS—During the follow-up period, with a mean duration of 12 ± 4 years (range 5–19), 17 patients (18%) developed MA, 45 patients (48%) developed RP, and 52% had either or both complications. A Cox proportional hazard regression, modeling duration to occurrence of MA or RP, showed that glycemic control (reflected by mean HbA1c) during the follow-up was significantly associated with both MA and RP when adjusted for sex, birth weight, age at onset, and tobacco use as potential confounders. Mean HbA1c during the first 5 years of diabetes was a near-significant determinant for development of MA (hazard ratio 1.41, P = 0.083) and a significant determinant of RP (1.32, P = 0.036). The age at onset of diabetes significantly influenced the risk of developing RP (1.11, P = 0.021). Thus, in a Kaplan-Meier analysis, onset of diabetes before the age of 5 years, compared with the age-groups 5–11 and >11 years, showed a longer time to occurrence of RP (P = 0.015), but no clear tendency was seen for MA, perhaps due to lower statistical power.

CONCLUSIONS—Despite modern insulin treatment, >50% of patients with childhood-onset type 1 diabetes developed detectable diabetes complications after ∼12 years of diabetes. Inadequate glycemic control, also during the first 5 years of diabetes, seems to accelerate time to occurrence, whereas a young age at onset of diabetes seems to prolong the time to development of microvascular complications. […] The present study and other studies (15,54) indicate that children with an onset of diabetes before the age of 5 years may have a prolonged time to development of microvascular complications. Thus, the youngest age-groups, who are most sensitive to hypoglycemia with regard to risk of persistent brain damage, may have a relative protection during childhood or a longer time to development of complications.”

It’s important to note that although some people reading the study may think this is all ancient history (people diagnosed in the ’80s?), to a lot of people it really isn’t. The study is of great personal interest to me, as I was diagnosed in ’87; had it been a Danish study rather than a Swedish one, I might well have been included in the analysis.

Another note to add in the context of the above coverage: unlike what the authors of the paper seem to think/imply, hypoglycemia may not be the only relevant variable in the context of the effect of childhood diabetes on brain development, where early diagnosis tends to be associated with less favourable outcomes – other variables which may be important include DKA episodes and perhaps also chronic hyperglycemia during early childhood. See this post for more on these topics.

Some more stuff from the paper:

“The annual incidence of type 1 diabetes in northern Sweden in children 0–14 years of age is now ∼31/100,000. During the time period 1981–1992, there has been an increase in the annual incidence from 19 to 31/100,000 in northern Sweden. This is similar to the rest of Sweden […]. Seventeen (18%) of the 94 patients fulfilled the criteria for MA during the follow-up period. None of the patients developed overt nephropathy, elevated serum creatinine, or had signs of any other kidney disorder, e.g., hematuria, during the follow-up period. […] The mean time to diagnosis of MA was 9 ± 3 years (range 4–15) from diabetes onset. Forty-five (48%) of the 94 patients fulfilled the criteria for RP during the follow-up period. None of the patients developed proliferative retinopathy or were treated with photocoagulation. The mean time to diagnosis of RP was 11 ± 4 years (range 4–19) from onset of diabetes. Of the 45 patients with RP, 13 (29%) had concomitant MA, and thus 13 (76.5%) of the 17 patients with MA had concomitant RP. […] Altogether, among the 94 patients, 32 (34%) had isolated RP, 4 (4%) had isolated MA, and 13 (14%) had combined RP and MA. Thus, 49 (52%) patients had either one or both complications and, hence, 45 (48%) had neither of these complications.”

“When modeling MA as a function of glycemic level up to the onset of MA or during the entire follow-up period, adjusting for sex, birth weight, age at onset of diabetes, and tobacco use, only glycemic control had a significant effect. An increase in hazard ratio (HR) of 83% per one percentage unit increase in mean HbA1c was seen. […] The increase in HR of developing RP for each percentage unit rise in HbA1c during the entire follow-up period was 43% and in the early period 32%. […] Age at onset of diabetes was a weak but significant independent determinant for the development of RP in all regression models (P = 0.015, P = 0.018, and P = 0.010, respectively). […] Despite that this study was relatively small and had a retrospective design, we were able to show that the glycemic level already during the first 5 years may be an important predictor of later development of both MA and RP. This is in accordance with previous prospective follow-up studies (16,30).”
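To make the "83% per percentage unit" figure concrete: in a Cox model the hazard ratio per unit of a covariate is exp(β), so the effects of multi-unit increases compound multiplicatively. An illustrative calculation (my arithmetic, not from the paper):

```python
import math

# In a Cox model the hazard ratio per covariate unit is exp(beta); an 83%
# increase in hazard per percentage unit of mean HbA1c means HR = 1.83,
# i.e. beta = ln(1.83). Multi-unit increases compound multiplicatively.
hr_per_unit = 1.83
beta = math.log(hr_per_unit)

def hr_for_increase(units: float) -> float:
    """Hazard ratio implied by an increase of `units` percentage points of HbA1c."""
    return math.exp(beta * units)

print(round(hr_for_increase(1), 2))   # 1.83
print(round(hr_for_increase(2), 2))   # about 3.35
```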

“Previously, male sex, smoking, and low birth weight have been shown to be risk factors for the development of nephropathy and retinopathy (6,45–49). However, in this rather small retrospective study with a limited follow-up time, we could not confirm these associations”. This may just be due to lack of power; it is a relatively small study. Again, this is/was of personal interest to me; two of those three risk factors apply to me, and neither of them is modifiable.

iii. Eighteen Years of Fair Glycemic Control Preserves Cardiac Autonomic Function in Type 1 Diabetes.

“Reduced cardiovascular autonomic function is associated with increased mortality in both type 1 and type 2 diabetes (1–4). Poor glycemic control plays an important role in the development and progression of diabetic cardiac autonomic dysfunction (5–7). […] Diabetic cardiovascular autonomic neuropathy (CAN) can be defined as impaired function of the peripheral autonomic nervous system. Exercise intolerance, resting tachycardia, and silent myocardial ischemia may be early signs of cardiac autonomic dysfunction (9). The most frequent finding in subclinical and symptomatic CAN is reduced heart rate variability (HRV) (10). […] No other studies have followed type 1 diabetic patients on intensive insulin treatment during ≥14-year periods and documented cardiac autonomic dysfunction. We evaluated the association between 18 years’ mean HbA1c and cardiac autonomic function in a group of type 1 diabetic patients with 30 years of disease duration.”

“A total of 39 patients with type 1 diabetes were followed during 18 years, and HbA1c was measured yearly. At 18 years follow-up heart rate variability (HRV) measurements were used to assess cardiac autonomic function. Standard cardiac autonomic tests during normal breathing, deep breathing, the Valsalva maneuver, and the tilt test were performed. Maximal heart rate increase during exercise electrocardiogram and minimal heart rate during sleep were also used to describe cardiac autonomic function.

RESULTS—We present the results for patients with mean HbA1c <8.4% (two lowest HbA1c tertiles) compared with those with HbA1c ≥8.4% (highest HbA1c tertile). All of the cardiac autonomic tests were significantly different in the high- and the low-HbA1c groups, and the most favorable scores for all tests were seen in the low-HbA1c group. In the low-HbA1c group, the HRV was 40% during deep breathing, and in the high-HbA1c group, the HRV was 19.9% (P = 0.005). Minimal heart rate at night was significantly lower in the low-HbA1c groups than in the high-HbA1c group (P = 0.039). With maximal exercise, the increase in heart rate was significantly higher in the low-HbA1c group compared with the high-HbA1c group (P = 0.001).

CONCLUSIONS—Mean HbA1c during 18 years was associated with cardiac autonomic function. Cardiac autonomic function was preserved with HbA1c <8.4%, whereas cardiac autonomic function was impaired in the group with HbA1c ≥8.4%. […] The study underlines the importance of good glycemic control and demonstrates that good long-term glycemic control is associated with preserved cardiac autonomic function, whereas a lack of good glycemic control is associated with cardiac autonomic dysfunction.”

These results are from Norway (Oslo), and again they seem relevant to me personally (‘from a statistical point of view’) – I’ve had diabetes for about as long as the people they included in the study.

iv. The Mental Health Comorbidities of Diabetes.

“Individuals living with type 1 or type 2 diabetes are at increased risk for depression, anxiety, and eating disorder diagnoses. Mental health comorbidities of diabetes compromise adherence to treatment and thus increase the risk for serious short- and long-term complications […] Young adults with type 1 diabetes are especially at risk for poor physical and mental health outcomes and premature mortality. […] we summarize the prevalence and consequences of mental health problems for patients with type 1 or type 2 diabetes and suggest strategies for identifying and treating patients with diabetes and mental health comorbidities.”

“Major advances in the past 2 decades have improved understanding of the biological basis for the relationship between depression and diabetes.2 A bidirectional relationship might exist between type 2 diabetes and depression: just as type 2 diabetes increases the risk for onset of major depression, a major depressive disorder signals increased risk for onset of type 2 diabetes.2 Moreover, diabetes distress is now recognized as an entity separate from major depressive disorder.2 Diabetes distress occurs because virtually all of diabetes care involves self-management behavior—requiring balance of a complex set of behavioral tasks by the person and family, 24 hours a day, without “vacation” days. […] Living with diabetes is associated with a broad range of diabetes-related distresses, such as feeling overwhelmed with the diabetes regimen; being concerned about the future and the possibility of serious complications; and feeling guilty when management is going poorly. This disease burden and emotional distress in individuals with type 1 or type 2 diabetes, even at levels of severity below the threshold for a psychiatric diagnosis of depression or anxiety, are associated with poor adherence to treatment, poor glycemic control, higher rates of diabetes complications, and impaired quality of life. […] Depression in the context of diabetes is […] associated with poor self-care with respect to diabetes treatment […] Depression among individuals with diabetes is also associated with increased health care use and expenditures, irrespective of age, sex, race/ethnicity, and health insurance status.3”

“Women with type 1 diabetes have a 2-fold increased risk for developing an eating disorder and a 1.9-fold increased risk for developing subthreshold eating disorders compared with women without diabetes.6 Less is known about eating disorders in boys and men with diabetes. Disturbed eating behaviors in women with type 1 diabetes include binge eating and caloric purging through insulin restriction, with rates of these disturbed eating behaviors reported to occur in 31% to 40% of women with type 1 diabetes aged between 15 and 30 years.6 […] disordered eating behaviors persist and worsen over time. Women with type 1 diabetes and eating disorders have poorer glycemic control, with higher rates of hospitalizations and retinopathy, neuropathy, and premature death compared with similarly aged women with type 1 diabetes without eating disorders.6 […] few diabetes clinics provide mental health screening or integrate mental/behavioral health services in diabetes clinical care.4 It is neither practical nor affordable to use standardized psychiatric diagnostic interviews to diagnose mental health comorbidities in individuals with diabetes. Brief paper-and-pencil self-report measures such as the Beck Depression Inventory […] that screen for depressive symptoms are practical in diabetes clinical settings, but their use remains rare.”

The paper does not mention this, but it is important to note that there are multiple plausible biological pathways which might help to explain bidirectional linkage between depression and type 2 diabetes. Physiological ‘stress’ (think: inflammation) is likely to be an important factor, and so are the typical physiological responses to some of the pharmacological treatments used to treat depression (…as well as other mental health conditions); multiple drugs used in psychiatry, including tricyclic antidepressants, cause weight gain and have proven diabetogenic effects – I’ve covered these topics before here on the blog. I’ve incidentally also covered other topics touched briefly upon in the paper – here’s for example a more comprehensive post about screening for depression in the diabetes context, and here’s a post with some information about how one might go about screening for eating disorders; skin signs are important. I was a bit annoyed that the author of the above paper did not mention this, as observing whether or not Russell’s sign – which is a very reliable indicator of an eating disorder – is present is easier/cheaper/faster than performing any kind of even semi-valid depression screen.

v. Diabetes, Depression, and Quality of Life. This last one covers topics related to the topics covered in the paper above.

“The study consisted of a representative population sample of individuals aged ≥15 years living in South Australia comprising 3,010 personal interviews conducted by trained health interviewers. The prevalence of depression in those suffering doctor-diagnosed diabetes and comparative effects of diabetic status and depression on quality-of-life dimensions were measured.

RESULTS—The prevalence of depression in the diabetic population was 24% compared with 17% in the nondiabetic population. Those with diabetes and depression experienced an impact with a large effect size on every dimension of the Short Form Health-Related Quality-of-Life Questionnaire (SF-36) as compared with those who suffered diabetes and who were not depressed. A supplementary analysis comparing both depressed diabetic and depressed nondiabetic groups showed there were statistically significant differences in the quality-of-life effects between the two depressed populations in the physical and mental component summaries of the SF-36.

CONCLUSIONS—Depression for those with diabetes is an important comorbidity that requires careful management because of its severe impact on quality of life.”

I felt slightly curious about the setup after having read this, because representative population samples of individuals should not in my opinion yield depression rates of either 17% or 24%. Rates that high suggest to me that the depression criteria used in the paper are a bit ‘laxer’/more inclusive than what you see in some other contexts when reading this sort of literature – to give an example of what I mean, the depression screening post I link to above noted that clinical or major depression occurred in 11.4% of people with diabetes, compared to a non-diabetic prevalence of 5%. There’s a long way from 11% to 24% and from 5% to 17%. Another potential explanation for such a high depression rate could of course also be some sort of selection bias at the data acquisition stage, but that’s obviously not the case here. However, 3,000 interviews is a lot of interviews, so let’s read on…

“Several studies have assessed the impact of depression in diabetes in terms of the individual’s functional ability or quality of life (3,4,13). Brown et al. (13) examined preference-based time tradeoff utility values associated with diabetes and showed that those with diabetes were willing to trade a significant proportion of their remaining life in return for a diabetes-free health state.”

“Depression was assessed using the mood module of the Primary Care Evaluation of Mental Disorders questionnaire. This has been validated to provide estimates of mental disorder comparable with those found using structured and longer diagnostic interview schedules (16). The mental disorders examined in the questionnaire included major depressive disorder, dysthymia, minor depressive disorder, and bipolar disorder. [So yes, the depression criteria used in this study are definitely more inclusive than depression criteria including only people with MDD] […] The Short Form Health-Related Quality-of-Life Questionnaire (SF-36) was also included to assess the quality of life of the different population groups with and without diabetes. […] Five groups were examined: the overall population without diabetes and without depression; the overall diabetic population; the depression-only population; the diabetic population without depression; and the diabetic population with depression.”

“Of the population sample, 205 (6.8%) were classified as having major depression, 130 (4.3%) had minor depression, 105 (3.5%) had partial remission of major depression, 79 (2.6%) had dysthymia, and 5 (0.2%) had bipolar disorder (depressed phase). No depressive syndrome was detected in 2,486 (82.6%) respondents. The population point prevalence of doctor-diagnosed diabetes in this survey was 5.2% (95% CI 4.6–6.0). The prevalence of depression in the diabetic population was 23.6% (22.1–25.1) compared with 17.1% (15.8–18.4) in the nondiabetic population. This difference approached statistical significance (P = 0.06). […] There [was] a clear difference in the quality-of-life scores for the diabetic and depression group when compared with the diabetic group without depression […] Overall, the highest quality-of-life scores are experienced by those without diabetes and depression and the lowest by those with diabetes and depression. […] the standard scores of those with no diabetes have quality-of-life status comparable with the population mean or slightly better. At the other extreme those with diabetes and depression experience the most severe comparative impact on quality-of-life for every dimension. Between these two extremes, diabetes overall and the diabetes without depression groups have a moderate-to-severe impact on the physical functioning, role limitations (physical), and general health scales […] The results of the two-factor ANOVA showed that the interaction term was significant only for the PCS [Physical Component Score – US] scale, indicating a greater than additive effect of diabetes and depression on the physical health dimension.”
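As a quick arithmetic cross-check, the category counts quoted above do add up to the stated sample size – a minimal Python sketch, with all figures copied from the abstract:

```python
# Depression-category counts from the South Australian survey (n = 3,010),
# copied from the quoted abstract.
counts = {
    "major depression": 205,
    "minor depression": 130,
    "partial remission of major depression": 105,
    "dysthymia": 79,
    "bipolar disorder (depressed phase)": 5,
}
n_total = 3010
n_depressed = sum(counts.values())
n_none = n_total - n_depressed
print(n_depressed, n_none, round(100 * n_none / n_total, 1))  # 524 2486 82.6
```

So roughly one respondent in six screened positive for some depressive syndrome, which is consistent with the 17–24% prevalence figures discussed above.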

“[T]here was a significant interaction between diabetes and depression on the PCS but not on the MCS [Mental Component Score. Do note in this context that the no-interaction result is far from certain, because as they observe: “it may simply be sample size that has not allowed us to observe a greater than additive effect in the MCS scale. Although there was no significant interaction between diabetes and depression and the MCS scale, we did observe increases on the effect size for the mental health dimensions”]. One explanation for this finding might be that depression can influence physical outcomes, such as recovery from myocardial infarction, survival with malignancy, and propensity to infection. Various mechanisms have been proposed for this, including changes to the immune system (24). Other possibilities are that depression in diabetes may affect the capacity to maintain medication vigilance, maintain a good diet, and maintain other lifestyle factors, such as smoking and exercise, all of which are likely possible pathways for a greater than additive effect. Whatever the mechanism involved, these data indicate that the addition of depression to diabetes has a severe impact on quality of life, and this needs to be managed in clinical practice.”

May 25, 2017 Posted by | Cardiology, Diabetes, Health Economics, Medicine, Nephrology, Neurology, Ophthalmology, Papers, Personal, Pharmacology, Psychiatry, Psychology | Leave a comment

A few diabetes papers of interest

A couple of weeks ago I decided to cover some of the diabetes articles I’d looked at and bookmarked in the past, but there were a lot of articles and I did not get very far. This post covers some more of the articles I had failed to cover here despite intending to do so at some point. Considering that these days I relatively regularly peruse e.g. the Diabetes Care archives, I am thinking of making this sort of post a semi-regular feature of the blog.

i. Association Between Diabetes and Hippocampal Atrophy in Elderly Japanese: The Hisayama Study.

“A total of 1,238 community-dwelling Japanese subjects aged ≥65 years underwent brain MRI scans and a comprehensive health examination in 2012. Total brain volume (TBV), intracranial volume (ICV), and hippocampal volume (HV) were measured using MRI scans for each subject. We examined the associations between diabetes-related parameters and the ratios of TBV to ICV (an indicator of global brain atrophy), HV to ICV (an indicator of hippocampal atrophy), and HV to TBV (an indicator of hippocampal atrophy beyond global brain atrophy) after adjustment for other potential confounders.”

“The multivariable-adjusted mean values of the TBV-to-ICV, HV-to-ICV, and HV-to-TBV ratios were significantly lower in the subjects with diabetes compared with those without diabetes (77.6% vs. 78.2% for the TBV-to-ICV ratio, 0.513% vs. 0.529% for the HV-to-ICV ratio, and 0.660% vs. 0.676% for the HV-to-TBV ratio; all P < 0.01). These three ratios decreased significantly with elevated 2-h postload glucose (PG) levels […] Longer duration of diabetes was significantly associated with lower TBV-to-ICV, HV-to-ICV, and HV-to-TBV ratios. […] Our data suggest that a longer duration of diabetes and elevated 2-h PG levels, a marker of postprandial hyperglycemia, are risk factors for brain atrophy, particularly hippocampal atrophy.”

“Intriguingly, our findings showed that the subjects with diabetes had significantly lower mean HV-to-TBV ratio values, indicating […] that the hippocampus is predominantly affected by diabetes. In addition, in our subjects a longer duration and a midlife onset of diabetes were significantly associated with a lower HV, possibly suggesting that a long exposure of diabetes particularly worsens hippocampal atrophy.”

The reason why hippocampal atrophy is a variable of interest to these researchers is that hippocampal atrophy is a feature of Alzheimer’s Disease, and diabetics have an elevated risk of AD. This is incidentally far from the first study providing some evidence for the existence of potential causal linkage between impaired glucose homeostasis and AD (see e.g. also this paper, which I’ve previously covered here on the blog).

ii. A Population-Based Study of All-Cause Mortality and Cardiovascular Disease in Association With Prior History of Hypoglycemia Among Patients With Type 1 Diabetes.

“Although patients with T1DM may suffer more frequently from hypoglycemia than those with T2DM (8), very few studies have investigated whether hypoglycemia may also increase the risk of CVD (6,9,10) or death (1,6,7) in patients with T1DM; moreover, the results of these studies have been inconclusive (6,9,10) because of the dissimilarities in their methodological aspects, including their enrollment of populations with T1DM with different levels of glycemic control, application of different data collection methods, and adoption of different lengths of observational periods.”

“Only a few population-based studies have examined the potential cumulative effect of repeated severe hypoglycemia on all-cause mortality or CVD incidence in T1DM (9). The Action to Control Cardiovascular Risk in Diabetes (ACCORD) study of T2DM found a weakly inverse association between the annualized number of hypoglycemic episodes and the risk of death (11,12). By contrast, some studies find that repeated hypoglycemia may be an aggravating factor to atherosclerosis in T1DM (13,14). Studies on the compromised sympathetic-adrenal reaction in patients with repeated hypoglycemia have been inconclusive regarding whether such a reaction may further damage intravascular coagulation and thrombosis (15) or decrease the vulnerability of these patients to adverse health outcomes (12).

Apart from the lack of information on the potential dose–gradient effect associated with severe hypoglycemic events in T1DM from population-based studies, the risks of all-cause mortality/CVD incidence associated with severe hypoglycemia occurring at different periods before all-cause mortality/CVD incidence have never been examined. In this study, we used the population-based medical claims of a cohort of patients with T1DM to examine whether the risks of all-cause mortality/CVD incidence are associated with previous episodes of severe hypoglycemia in different periods and whether severe hypoglycemia may pose a dose–gradient effect on the risks of all-cause mortality/CVD incidence.”

“Two nested case-control studies with age- and sex-matched control subjects and using the time-density sampling method were performed separately within a cohort of 10,411 patients with T1DM in Taiwan. The study enrolled 564 nonsurvivors and 1,615 control subjects as well as 743 CVD case subjects and 1,439 control subjects between 1997 and 2011. History of severe hypoglycemia was identified during 1 year, 1–3 years, and 3–5 years before the occurrence of the study outcomes.”

“Prior severe hypoglycemic events within 1 year were associated with higher risks of all-cause mortality and CVD (adjusted OR 2.74 [95% CI 1.96–3.85] and 2.02 [1.35–3.01], respectively). Events occurring within 1–3 years and 3–5 years before death were also associated with adjusted ORs of 1.94 (95% CI 1.39–2.71) and 1.68 (1.15–2.44), respectively. Significant dose–gradient effects of severe hypoglycemia frequency on mortality and CVD were observed within 5 years. […] we found that a greater frequency of severe hypoglycemia occurring 1 year before death was significantly associated with a higher OR of all-cause mortality (1 vs. 0: 2.45 [95% CI 1.65–3.63]; ≥2 vs. 0: 3.49 [2.01–6.08], P < 0.001 for trend). Although the strength of the association was attenuated, a significant dose–gradient effect still existed for severe hypoglycemia occurring in 1–3 years (P < 0.001 for trend) and 3–5 years (P < 0.015 for trend) before death. […] Exposure to repeated severe hypoglycemic events can lead to higher risks of mortality and CVD.”
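The adjusted ORs reported above come from matched case-control analyses and cannot be reproduced from the quoted numbers alone, but for readers unfamiliar with the quantity being reported, here is the textbook crude odds ratio with a Wald 95% CI – the 2×2 counts below are made up purely for illustration, not the paper's data:

```python
import math

def crude_odds_ratio(a, b, c, d, z=1.96):
    """Crude OR and Wald 95% CI for a 2x2 table:
    a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Made-up counts for illustration only (not the paper's data):
or_, lo, hi = crude_odds_ratio(a=120, b=444, c=180, d=1435)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

The paper's estimates additionally adjust for covariates and respect the matching, so they would be computed with conditional logistic regression rather than this simple formula.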

“Our findings are supported by two previous studies that investigated atherosclerosis risk in T1DM (13,14). The DCCT/EDIC project reported that the prevalence of coronary artery calcification, an established atherosclerosis marker, was linearly correlated with the incidence rate of hypoglycemia on the DCCT stage (14). Giménez et al. (13) also demonstrated that repeated episodes of hypoglycemia were an aggravating factor for preclinical atherosclerosis in T1DM. […] The mechanism of hypoglycemia that predisposes to all-cause mortality/CVD incidence remains unclear.”

iii. Global Estimates on the Number of People Blind or Visually Impaired by Diabetic Retinopathy: A Meta-analysis From 1990 to 2010.

“On the basis of previous large-scale population-based studies and meta-analyses, diabetic retinopathy (DR) has been recognized as one of the most common and important causes for visual impairment and blindness (1–19). These studies in general showed that DR was the leading cause of blindness globally among working-aged adults and therefore has a significant socioeconomic impact (20–22).”

“A previous meta-analysis (21) summarizing 35 studies with more than 20,000 patients with diabetes estimated a prevalence of any DR of 34.6%, of diabetic macular edema of 6.8%, and of vision-threating DR of 10.2% within the diabetes population. […] Yau et al. (21) estimated that ∼93 million people had some DR and 28 million people had sight-threatening stages of DR. However, this meta-analysis did not address the prevalence of visual impairment and blindness due to DR and thus the impact of DR on the general population. […] We therefore conducted the present meta-analysis of all available population-based studies performed worldwide within the last two decades as part of the Global Burden of Disease Study 2010 (GBD) to estimate the number of people affected by blindness and visual impairment.”

“DR [Diabetic Retinopathy] ranks as the fifth most common cause of global blindness and of global MSVI [moderate and severe vision impairment] (25). […] this analysis estimates that, in 2010, 1 out of every 39 blind people had blindness due to DR and 1 out of every 52 visually impaired people had visual impairment due to DR. […] Globally in 2010, out of overall 32.4 million blind and 191 million visually impaired people, 0.8 million were blind and 3.7 million were visually impaired because of DR, with an alarming increase of 27% and 64%, respectively, spanning the two decades from 1990 to 2010. DR accounted for 2.6% of all blindness in 2010 and 1.9% of all MSVI worldwide, increasing from 2.1% and 1.3%, respectively, in 1990. […] The number of persons with visual impairment due to DR worldwide is rising and represents an increasing proportion of all blindness/MSVI causes. Age-standardized prevalence of DR-related blindness/MSVI was higher in sub-Saharan Africa and South Asia.”
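The “1 in 39”/“1 in 52” figures follow directly from the headline counts; redoing the division with the rounded millions quoted above lands close to, but not exactly on, the published ratios – the gaps reflect rounding of the inputs:

```python
# Rounded 2010 headline figures from the quoted text (millions of people).
blind_total, blind_dr = 32.4, 0.8
msvi_total, msvi_dr = 191.0, 3.7

print(round(blind_total / blind_dr, 1))        # 40.5 ~ the quoted "1 in 39"
print(round(msvi_total / msvi_dr, 1))          # 51.6 ~ the quoted "1 in 52"
print(round(100 * blind_dr / blind_total, 1))  # 2.5 ~ the quoted 2.6% of blindness
print(round(100 * msvi_dr / msvi_total, 1))    # 1.9 - matches the quoted 1.9% of MSVI
```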

“Our data suggest that the percentage of blindness and MSVI attributable to DR was lower in low-income regions with younger populations than in high-income regions with older populations. There are several reasons that may explain this observation. First, low-income societies may have a higher percentage of unoperated cataract or undercorrected refractive error–related blindness and MSVI (25), which is probably related to access to visual and ocular health services. Therefore, the proportional increase in blindness and MSVI attributable to DR may be rising because of the decreasing proportion attributable to cataract (25) as a result of the increasing availability of cataract surgery in many parts of the world (29) during the past decade. Improved visualization of the fundus afforded by cataract surgery should also improve the detection of DR. The increase in the percentage of global blindness caused by DR within the last two decades took place in all world regions except Western Europe and high-income North America where there was a slight decrease. This decrease may reflect the effect of intensified prevention and treatment of DR possibly in part due to the introduction of intravitreal injections of steroids and anti-VEGF (vascular endothelial growth factor) drugs (30,31).

Second, in regions with poor medical infrastructure, patients with diabetes may not live long enough to experience DR (32). This reduces the number of patients with diabetes, and, furthermore, it reduces the number of patients with DR-related vision loss. Studies in the literature have reported that the prevalence of severe DR decreased from 1990 to 2010 (21) while the prevalence of diabetes simultaneously increased (27), which implies a reduction in the prevalence of severe DR per person with diabetes. […] Third, […] younger populations may have a lower prevalence of diabetes (33). […] Therefore, considering further economic development in rural regions, improvements in medical infrastructure, the general global demographic transition to elderly populations, and the association between increasing economic development and obesity, we project the increase in the proportion of DR-related blindness and MSVI to continue to rise in the future.”

iv. Do Patient Characteristics Impact Decisions by Clinicians on Hemoglobin A1c Targets?

“In setting hemoglobin A1c (HbA1c) targets, physicians must consider individualized risks and benefits of tight glycemic control (1,2) by recognizing that the risk-benefit ratio may become unfavorable in certain patients, including the elderly and/or those with multiple comorbidities (3,4). Customization of treatment goals based on patient characteristics is poorly understood, partly due to insufficient data on physicians’ decisions in setting targets. We used the National Health and Nutrition Examination Survey (NHANES) to analyze patient-reported HbA1c targets set by physicians and to test whether targets are correlated with patient characteristics.”

“we did not find any evidence that U.S. physicians systematically consider important patient-specific information when selecting the intensity of glycemic control. […] the lack of variation with patient characteristics suggests overreliance on a general approach, without consideration of individual variation in the risks and benefits (or patient preference) of tight control.”

v. Cardiovascular Autonomic Neuropathy, Sexual Dysfunction, and Urinary Incontinence in Women With Type 1 Diabetes.

“This study evaluated associations among cardiovascular autonomic neuropathy (CAN), female sexual dysfunction (FSD), and urinary incontinence (UI) in women with type I diabetes mellitus (T1DM). […] We studied 580 women with T1DM in the Diabetes Control and Complications Trial/Epidemiology of Diabetes Interventions and Complications Study (DCCT/EDIC).”

“At EDIC year 17, FSD was observed in 41% of women and UI in 30%. […] We found that CAN was significantly more prevalent among women with FSD and/or UI, because 41% of women with FSD and 44% with UI had positive measures of CAN compared with 30% without FSD and 38% without UI at EDIC year 16/17. We also observed bivariate associations between FSD and several measures of CAN […] In long-standing T1DM, CAN may predict development of FSD and may be a useful surrogate for generalized diabetic autonomic neuropathy.”

“Although autonomic dysfunction has been considered an important factor in the etiology of many diabetic complications, including constipation, exercise intolerance, bladder dysfunction, erectile dysfunction, orthostatic hypotension, and impaired neurovascular function, our study is among the first to systematically demonstrate a link between CAN and FSD in a large cohort of well-characterized patients with T1DM (14).”

vi. Correlates of Medication Adherence in the TODAY Cohort of Youth With Type 2 Diabetes.

“A total of 699 youth 10–17 years old with recent-onset type 2 diabetes and ≥80% adherence to metformin therapy for ≥8 weeks during a run-in period were randomized to receive one of three treatments. Participants took two study pills twice daily. Adherence was calculated by pill count from blister packs returned at visits. High adherence was defined as taking ≥80% of medication; low adherence was defined as taking <80% of medication.”
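The pill-count adherence definition quoted above is simple enough to write down explicitly. A minimal sketch – the 4-pills-per-day figure follows from “two study pills twice daily”, while the function and argument names are mine:

```python
def adherence_from_pill_count(pills_dispensed, pills_returned, days,
                              pills_per_day=4):
    """TODAY participants took two study pills twice daily (4/day);
    adherence = pills taken / pills expected over the interval."""
    expected = days * pills_per_day
    taken = pills_dispensed - pills_returned
    pct = 100 * taken / expected
    return pct, ("high" if pct >= 80 else "low")

# Illustrative numbers: 240 pills dispensed, 60 returned after 56 days.
print(adherence_from_pill_count(240, 60, 56))
# 224 pills expected, 180 taken -> ~80.4% -> "high"
```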

“In this low socioeconomic cohort, high and low adherence did not differ by sex, age, family income, parental education, or treatment group. Adherence declined over time (72% high adherence at 2 months, 56% adherence at 48 months, P < 0.0001). A greater percentage of participants with low adherence had clinically significant depressive symptoms at baseline (18% vs. 12%, P = 0.0415). No adherence threshold predicted the loss of glycemic control. […] Most pediatric type 1 diabetes studies (5–7) consistently document a correlation between adherence and race, ethnicity, and socioeconomic status, and studies of adults with type 2 diabetes (8,9) have documented that depressed patients are less adherent to their diabetes regimen. There is a dearth of information in the literature regarding adherence to medication in pediatric patients with type 2 diabetes.”

“In the cohort, the presence of baseline clinically significant depressive symptoms was associated with subsequent lower adherence. […] The TODAY cohort demonstrated deterioration in study medication adherence over time, irrespective of treatment group assignment. […] Contrary to expectation, demographic factors (sex, race-ethnicity, household income, and parental educational level) did not predict medication adherence. The lack of correlation with these factors in the TODAY trial may be explained by the limited income and educational range of the families in the TODAY trial. Nearly half of the families in the TODAY trial had an annual income of <$25,000, and, for over half of the families, the highest level of parental education was a high school degree or lower. In addition, our run-in criteria selected for more adherent subjects. All subjects had to have >80% adherence to metformin therapy for ≥8 weeks before they could be randomized. This may have limited variability in medication adherence postrandomization. It is also possible that selecting for more adherent subjects in the run-in period also selected for subjects with a lower frequency of depressive symptoms.”

“In the TODAY trial, baseline clinically significant depressive symptoms were more prevalent in the lower-adherence group, suggesting that regular screening for depressive symptoms should be undertaken to identify youth who were at high risk for poor medication adherence. […] Studies in adults with type 2 diabetes (23–28) consistently report that depressed patients are less adherent to their diabetes regimen and experience more physical complications of diabetes. Identifying youth who are at risk for poor medication adherence early in the course of disease would make it possible to provide support and, if needed, specific treatment. Although we were not able to determine whether the treatment of depressive symptoms changed adherence over time, our findings support the current guidelines for psychosocial screening in youth with diabetes (29,30).”

vii. Increased Risk of Incident Chronic Kidney Disease, Cardiovascular Disease, and Mortality in Patients With Diabetes With Comorbid Depression.

Another depression-related paper, telling another part of the story. If depressed diabetics are less compliant/adherent, which seems – as per the above study – to be the case in both the adult and the pediatric patient populations, then you might also expect this reduced compliance/adherence to ‘translate’ into this group having poorer metabolic control, and thus be at higher risk of developing microvascular complications such as nephropathy. This seems to be what we observe, at least according to the findings of this study:

“It is not known if patients with diabetes with depression have an increased risk of chronic kidney disease (CKD). We examined the association between depression and incident CKD, mortality, and incident cardiovascular events in U.S. veterans with diabetes.”

“Among a nationally representative prospective cohort of >3 million U.S. veterans with baseline estimated glomerular filtration rate (eGFR) ≥60 mL/min/1.73 m2, we identified 933,211 patients with diabetes. Diabetes was ascertained by an ICD-9-CM code for diabetes, an HbA1c >6.4%, or receiving antidiabetes medication during the inclusion period. Depression was defined by an ICD-9-CM code for depression or by antidepressant use during the inclusion period. Incident CKD was defined as two eGFR levels <60 mL/min/1.73 m2 separated by ≥90 days and a >25% decline in baseline eGFR.”
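Reading that incident-CKD definition literally – two eGFR values below 60 mL/min/1.73 m2 at least 90 days apart, each also representing a >25% drop from the baseline eGFR – it can be sketched as a simple predicate. The names and the exact interpretation of the decline criterion are my assumptions, not the paper’s actual algorithm:

```python
from datetime import date

def incident_ckd(baseline_egfr, measurements):
    """measurements: chronologically ordered (date, eGFR) tuples.
    Flags incident CKD if two readings are <60 mL/min/1.73 m2 and
    >25% below baseline, at least 90 days apart."""
    low = [(d, e) for d, e in measurements
           if e < 60 and e < 0.75 * baseline_egfr]
    return any((d2 - d1).days >= 90
               for i, (d1, _) in enumerate(low)
               for d2, _ in low[i + 1:])

obs = [(date(2010, 1, 1), 55), (date(2010, 6, 1), 52)]
print(incident_ckd(baseline_egfr=80, measurements=obs))
# True: both readings <60 and >25% below baseline, 151 days apart
```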

“Depression was associated with 20% higher risk of incident CKD (adjusted hazard ratio [aHR] and 95% CI: 1.20 [1.19–1.21]). Similarly, depression was associated with increased all-cause mortality (aHR and 95% CI: 1.25 [1.24–1.26]). […] The presence of depression in patients with diabetes is associated with higher risk of developing CKD compared with nondepressed patients.”

It’s important to remember that the higher reported eGFRs in the depressed patient group may not be important/significant, and they should not be taken as an indication of relatively better kidney function in this patient population – especially in the type 2 context, the relationship between eGFR and kidney function is complicated. I refer to Bakris et al.‘s text on these topics for details (blog coverage here).

May 6, 2017 Posted by | Cardiology, Diabetes, Epidemiology, Medicine, Nephrology, Neurology, Ophthalmology, Psychology, Studies | Leave a comment

A few diabetes papers of interest

i. Cognitive Dysfunction in Older Adults With Diabetes: What a Clinician Needs to Know. I’ve talked about these topics before here on the blog (see e.g. these posts on related topics), but this is a good summary article. I have added some observations from the paper below:

“Although cognitive dysfunction is associated with both type 1 and type 2 diabetes, there are several distinct differences observed in the domains of cognition affected in patients with these two types. Patients with type 1 diabetes are more likely to have diminished mental flexibility and slowing of mental speed, whereas learning and memory are largely not affected (8). Patients with type 2 diabetes show decline in executive function, memory, learning, attention, and psychomotor efficiency (9,10).”

“So far, it seems that the risk of cognitive dysfunction in type 2 diabetes may be influenced by glycemic control, hypoglycemia, inflammation, depression, and macro- and microvascular pathology (14). The cumulative impact of these conditions on the vascular etiology may further decrease the threshold at which cognition is affected by other neurological conditions in the aging brain. In patients with type 1 diabetes, it seems as though diabetes has a lesser impact on cognitive dysfunction than those patients with type 2 diabetes. […] Thus, the cognitive decline in patients with type 1 diabetes may be mild and may not interfere with their functionality until later years, when other aging-related factors become important. […] However, recent studies have shown a higher prevalence of cognitive dysfunction in older patients (>60 years of age) with type 1 diabetes (5).”

“Unlike other chronic diseases, diabetes self-care involves many behaviors that require various degrees of cognitive pliability and insight to perform proper self-care coordination and planning. Glucose monitoring, medications and/or insulin injections, pattern management, and diet and exercise timing require participation from different domains of cognitive function. In addition, the recognition, treatment, and prevention of hypoglycemia, which are critical for the older population, also depend in large part on having intact cognition.

The reason a clinician needs to recognize different domains of cognition affected in patients with diabetes is to understand which self-care behavior will be affected in that individual. […] For example, a patient with memory problems may forget to take insulin doses, forget to take medications/insulin on time, or forget to eat on time. […] Cognitively impaired patients using insulin are more likely to not know what to do in the event of low blood glucose or how to manage medication on sick days (34). Patients with diminished mental flexibility and processing speed may do well with a simple regimen but may fail if the regimen is too complex. In general, older patients with diabetes with cognitive dysfunction are less likely to be involved in diabetes self-care and glucose monitoring compared with age-matched control subjects (35). […] Other comorbidities associated with aging and diabetes also add to the burden of cognitive impairment and its impact on self-care abilities. For example, depression is associated with a greater decline in cognitive function in patients with type 2 diabetes (36). Depression also can independently negatively impact the motivation to practice self-care.”

“Recently, there is an increasing discomfort with the use of A1C as a sole parameter to define glycemic goals in the older population. Studies have shown that A1C values in the older population may not reflect the same estimated mean glucose as in the younger population. Possible reasons for this discrepancy are the commonly present comorbidities that impact red cell life span (e.g., anemia, uremia, renal dysfunction, blood transfusion, erythropoietin therapy) (45,46). In addition, A1C level does not reflect glucose excursions and variability. […] Thus, it is prudent to avoid A1C as the sole measure of glycemic goal in this population. […] In patients who need insulin therapy, simplification, also known as de-intensification of the regimen, is generally recommended in all frail patients, especially if they have cognitive dysfunction (37,49). However, the practice has not caught up with the recommendations as shown by large observational studies showing unnecessary intensive control in patients with diabetes and dementia (50–52).”

“With advances in the past few decades, we now see a larger number of patients with type 1 diabetes who are aging successfully and facing the new challenges that aging brings. […] Patients with type 1 diabetes are typically proactive in their disease management and highly disciplined. Cognitive dysfunction in these patients creates significant distress for the first time in their lives; they suddenly feel a “lack of control” over the disease they have managed for many decades. The addition of autonomic dysfunction, gastropathy, or neuropathy may result in wider glucose excursions. These patients are usually more afraid of hyperglycemia than hypoglycemia — both of which they have managed for many years. However, cognitive dysfunction in older adults with type 1 diabetes has been found to be associated with hypoglycemic unawareness and glucose variability (5), which in turn increases the risk of severe hypoglycemia (54). The need for goal changes to avoid hypoglycemia and accept some hyperglycemia can be very difficult for many of these patients.”

ii. Trends in Drug Utilization, Glycemic Control, and Rates of Severe Hypoglycemia, 2006–2013.

“From 2006 to 2013, use increased for metformin (from 47.6 to 53.5%), dipeptidyl peptidase 4 inhibitors (0.5 to 14.9%), and insulin (17.1 to 23.0%) but declined for sulfonylureas (38.8 to 30.8%) and thiazolidinediones (28.5 to 5.6%; all P < 0.001). […] The overall rate of severe hypoglycemia remained the same (1.3 per 100 person-years; P = 0.72), declined modestly among the oldest patients (from 2.9 to 2.3; P < 0.001), and remained high among those with two or more comorbidities (3.2 to 3.5; P = 0.36). […] During the recent 8-year period, the use of glucose-lowering drugs has changed dramatically among patients with T2DM. […] The use of older classes of medications, such as sulfonylureas and thiazolidinediones, declined. During this time, glycemic control of T2DM did not improve in the overall population and remained poor among nearly a quarter of the youngest patients. Rates of severe hypoglycemia remained largely unchanged, with the oldest patients and those with multiple comorbidities at highest risk. These findings raise questions about the value of the observed shifts in drug utilization toward newer and costlier medications.”

“Our findings are consistent with a prior study of drug prescribing in U.S. ambulatory practice conducted from 1997 to 2012 (2). In that study, similar increases in DPP-4 inhibitor and insulin analog prescribing were observed; these changes were accompanied by a 61% increase in drug expenditures (2). Our study extends these findings to drug utilization and demonstrates that these increases occurred in all age and comorbidity subgroups. […] In contrast, metformin use increased only modestly between 2006 and 2013 and remained relatively low among older patients and those with two or more comorbidities. Although metformin is recommended as first-line therapy (26), it may be underutilized as the initial agent for the treatment of T2DM (27). Its use may be additionally limited by coexisting contraindications, such as chronic kidney disease (28).”

“The proportion of patients with a diagnosis of diabetes who did not fill any glucose-lowering medications declined slightly (25.7 to 24.1%; P < 0.001).”

That is, one in four people who had a diagnosis of type 2 diabetes were not taking any prescription drugs for their health condition. I wonder how many of those people have read Wikipedia articles like this one.

“When considering treatment complexity, the use of oral monotherapy increased slightly (from 24.3 to 26.4%) and the use of multiple (two or more) oral agents declined (from 33.0 to 26.5%), whereas the use of insulin alone and in combination with oral agents increased (from 6.0 to 8.5% and from 11.1 to 14.6%, respectively; all P values <0.001).”

“Between 1987 and 2011, per person medical spending attributable to diabetes doubled (4). More than half of the increase was due to prescription drug spending (4). Despite these spending increases and greater utilization of newly developed medications, we showed no concurrent improvements in overall glycemic control or the rates of severe hypoglycemia in our study. Although the use of newer and more expensive agents may have other important benefits (44), further studies are needed to define the value and cost-effectiveness of current treatment options.”

iii. Among Low-Income Respondents With Diabetes, High-Deductible Versus No-Deductible Insurance Sharply Reduces Medical Service Use.

“Using the 2011–2013 Medical Expenditure Panel Survey, bivariate and regression analyses were conducted to compare demographic characteristics, medical service use, diabetes care, and health status among privately insured adult respondents with diabetes, aged 18–64 years (N = 1,461) by lower (<200% of the federal poverty level) and higher (≥200% of the federal poverty level) income and deductible vs. no deductible (ND), low deductible (≤$1,000/$2,400) (LD), and high deductible (>$1,000/$2,400) (HD). The National Health Interview Survey 2012–2014 was used to analyze differences in medical debt and delayed/avoided needed care among adult respondents with diabetes (n = 4,058) by income. […] Compared with privately insured respondents with diabetes with ND, privately insured lower-income respondents with diabetes with an LD report significant decreases in service use for primary care, checkups, and specialty visits (27%, 39%, and 77% lower, respectively), and respondents with an HD decrease use by 42%, 65%, and 86%, respectively. Higher-income respondents with an LD report significant decreases in specialty (28%) and emergency department (37%) visits.”

“The reduction in ambulatory visits made by lower-income respondents with ND compared with lower-income respondents with an LD or HD is far greater than for higher-income patients. […] The substantial reduction in checkup (preventive) and specialty visits by those with a lower income who have an HDHP [high-deductible health plan, US] implies a very different pattern of service use compared with lower-income respondents who have ND and with higher-income respondents. Though preventive visits require no out-of-pocket costs, reduced preventive service use with HDHPs is well established and might be the result of patients being unaware of this benefit or their concern about findings that could lead to additional expenses (31). Such sharply reduced service use by low-income respondents with diabetes may not be desirable. Patients with diabetes benefit from assessment of diabetes control, encouragement and reinforcement of behavior change and medication use, and early detection and treatment of diabetes complications or concomitant disease.”

iv. Long-term Mortality and End-Stage Renal Disease in a Type 1 Diabetes Population Diagnosed at Age 15–29 Years in Norway.

“OBJECTIVE To study long-term mortality, causes of death, and end-stage renal disease (ESRD) in people diagnosed with type 1 diabetes at age 15–29 years.

RESEARCH DESIGN AND METHODS This nationwide, population-based cohort with type 1 diabetes diagnosed during 1978–1982 (n = 719) was followed from diagnosis until death, emigration, or September 2013. Linkages to the Norwegian Cause of Death Registry and the Norwegian Renal Registry provided information on causes of death and whether ESRD was present.

RESULTS During 30 years’ follow-up, 4.6% of participants developed ESRD and 20.6% (n = 148; 106 men and 42 women) died. Cumulative mortality by years since diagnosis was 6.0% (95% CI 4.5–8.0) at 10 years, 12.2% (10.0–14.8) at 20 years, and 18.4% (15.8–21.5) at 30 years. The SMR [standardized mortality ratio] was 4.4 (95% CI 3.7–5.1). Mean time from diagnosis of diabetes to ESRD was 23.6 years (range 14.2–33.5). Death was caused by chronic complications (32.2%), acute complications (20.5%), violent death (19.9%), or any other cause (27.4%). Death was related to alcohol in 15% of cases. SMR for alcohol-related death was 6.8 (95% CI 4.5–10.3), for cardiovascular death was 7.3 (5.4–10.0), and for violent death was 3.6 (2.3–5.3).

CONCLUSIONS The cumulative incidence of ESRD was low in this cohort with type 1 diabetes followed for 30 years. Mortality was 4.4 times that of the general population, and more than 50% of all deaths were caused by acute or chronic complications. A relatively high proportion of deaths were related to alcohol.”
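A brief note on the SMR figures quoted here and below: a standardized mortality ratio is the number of observed deaths in the cohort divided by the number one would expect if general-population age-specific mortality rates applied to the cohort’s person-years. A minimal sketch – the rates and person-year figures below are invented for illustration, not taken from the paper:

```python
def smr(observed_deaths, person_years, pop_rates):
    """Standardized mortality ratio: observed deaths over expected
    deaths, where expected deaths come from applying general-
    population age-band mortality rates to cohort person-years."""
    expected = sum(person_years[band] * pop_rates[band]
                   for band in person_years)
    return observed_deaths / expected

# Invented numbers chosen so the ratio lands near the paper's 4.4
person_years = {"15-24": 5000, "25-34": 7000, "35-44": 6000}
pop_rates = {"15-24": 0.0008, "25-34": 0.0012, "35-44": 0.0035}
print(round(smr(148, person_years, pop_rates), 1))  # 4.4
```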

Some additional observations from the paper:

“Studies assessing causes of death in type 1 diabetes are most frequently conducted in individuals diagnosed during childhood (1–7) or without evaluating the effect of age at diagnosis (8,9). Reports on causes of death in cohorts of patients diagnosed during late adolescence or young adulthood, with long-term follow-up, are less frequent (10). […] Adherence to treatment during this age is poor and the risk of acute diabetic complications is high (13–16). Mortality may differ between those with diabetes diagnosed during this period of life and those diagnosed during childhood.”

“Mortality was between four and five times higher than in the general population […]. The excess mortality was similar for men […] and women […]. SMR was higher in the lower age bands — 6.7 (95% CI 3.9–11.5) at 15–24 years and 7.3 (95% CI 5.2–10.1) at 25–34 years — compared with the higher age bands: 3.7 (95% CI 2.7–4.9) at 45–54 years and 3.9 (95% CI 2.6–5.8) at 55–65 years […]. The Cox regression model showed that the risk of death increased significantly by age at diagnosis (HR 1.1; 95% CI 1.1–1.2; P < 0.001) and was eight to nine times higher if ESRD was present (HR 8.7; 95% CI 4.8–15.5; P < 0.0001). […] the underlying cause of death was diabetes in 57 individuals (39.0%), circulatory in 22 (15.1%), cancer in 18 (12.3%), accidents or intoxications in 20 (13.7%), suicide in 8 (5.5%), and any other cause in 21 (14.4%) […] In addition, diabetes contributed to death in 29.5% (n = 43) and CVD contributed to death in 10.9% (n = 29) of the 146 cases. Diabetes was mentioned on the death certificate for 68.2% of the cohort but for only 30.0% of the violent deaths. […] In 60% (88/146) of the cases the review committee considered death to be related to diabetes, whereas in 40% (58/146) the cause was unrelated to diabetes or had an unknown relation to diabetes. According to the clinical committee, acute complications caused death in 20.5% (30/146) of the cases; 20 individuals died as a result of DKA and 10 from hypoglycemia. […] Chronic complications caused the largest proportion of deaths (47/146; 32.2%) and increased with increasing duration of diabetes […]. Among individuals dying as a result of chronic complications (n = 47), CVD caused death in 94% (n = 44) and renal failure in 6% (n = 3). ESRD contributed to death in 22.7% (10/44) of those dying from CVD. Cardiovascular death occurred at mortality rates seven times higher than those in the general population […]. ESRD caused or contributed to death in 13 of 14 cases, when present.”

“Violence (intoxications, accidents, and suicides) was the leading cause of death before 10 years’ duration of diabetes; thereafter it was only a minor cause […] Insulin was used in two of seven suicides. […] According to the available medical records and autopsy reports, about 20% (29/146) of the deceased misused alcohol. In 15% (22/146) alcohol-related ICD-10 codes were listed on the death certificate (18% [19/106] of men and 8% [3/40] of women). In 10 cases the cause of death was uncertain but considered to be related to alcohol or diabetes […] The SMR for alcohol-related death was high when considering the underlying cause of death (5.0; 95% CI 2.5–10.0), and even higher when considering all alcohol-related ICD-10 codes listed on the death certificate (6.8; 95% CI 4.5–10.3). The cause of death was associated with alcohol in 21.8% (19/87) of those who died with less than 20 years’ diabetes duration. Drug abuse was noted on the death certificate in only two cases.”

“During follow-up, 33 individuals (4.6%; 22 men and 11 women) developed ESRD as a result of diabetic nephropathy. Mean time from diagnosis of diabetes to ESRD was 23.6 years (range 14.2–33.5 years). Cumulative incidence of ESRD by years since diagnosis of diabetes was 1.4% (95% CI 0.7–2.7) at 20 years and 4.8% (95% CI 3.4–6.9) at 30 years.”

“This study highlights three important findings. First, among individuals who were diagnosed with type 1 diabetes in late adolescence and early adulthood and had good access to health care, and who were followed for 30 years, mortality was four to five times that of the general population. Second, 15% of all deaths were associated with alcohol, and the SMR for alcohol-related deaths was 6.8. Third, there was a relatively low cumulative incidence of ESRD (4.8%) 30 years after the diagnosis of diabetes.

We report mortality higher than those from a large, population-based study from Finland that found cumulative mortality around 6% at 20 years’ and 15% at 30 years’ duration of diabetes among a population with age at onset and year of diagnosis similar to those in our cohort (10). The corresponding numbers in our cohort were 12% and 18%, respectively; the discrepancy was particularly high at 20 years. The SMR in the Finnish cohort was lower than that in our cohort (2.6–3.0 vs. 3.7–5.1), and those authors reported the SMR to be lower in late-onset diabetes (at age 15–29 years) compared with early-onset (childhood-diagnosed) diabetes. The differences between the Norwegian and Finnish data are difficult to explain since both reports are from countries with good access to health care and a high incidence of type 1 diabetes.”

However, the reason for the somewhat different SMRs in these two reasonably similar countries may actually be quite simple – the important variable may be alcohol:

“Finland and Norway are appropriate to compare because they share important population and welfare characteristics. There are, however, significant differences in drinking levels and alcohol-related mortality: the Finnish population consumes more alcohol and the Norwegian population consumes less. The mortality rates for deaths related to alcohol are about three to four times higher in Finland than in Norway (30). […] The markedly higher SMR in our cohort can probably be explained by the lower mortality rates for alcohol-related mortality in the general population. […] In conclusion, the high mortality reported in this cohort with an onset of diabetes in late adolescence and young adulthood draws attention to people diagnosed during a vulnerable period of life. Both acute and chronic complications cause substantial premature mortality […] Our study suggests that increased awareness of alcohol-related death should be encouraged in clinics providing health care to this group of patients.”

April 23, 2017 Posted by | Diabetes, Economics, Epidemiology, Health Economics, Medicine, Nephrology, Neurology, Papers, Pharmacology, Psychology | Leave a comment

A few autism papers

i. The anterior insula in autism: Under-connected and under-examined.

“While the past decade has witnessed a proliferation of neuroimaging studies of autism, theoretical approaches for understanding systems-level brain abnormalities remain poorly developed. We propose a novel anterior insula-based systems-level model for investigating the neural basis of autism, synthesizing recent advances in brain network functional connectivity with converging evidence from neuroimaging studies in autism. The anterior insula is involved in interoceptive, affective and empathic processes, and emerging evidence suggests it is part of a “salience network” integrating external sensory stimuli with internal states. Network analysis indicates that the anterior insula is uniquely positioned as a hub mediating interactions between large-scale networks involved in externally- and internally-oriented cognitive processing. A recent meta-analysis identifies the anterior insula as a consistent locus of hypoactivity in autism. We suggest that dysfunctional anterior insula connectivity plays an important role in autism. […]

Increasing evidence for abnormal brain connectivity in autism comes from studies using functional connectivity measures […] These findings support the hypothesis that under-connectivity between specific brain regions is a characteristic feature of ASD. To date, however, few studies have examined functional connectivity within and between key large-scale canonical brain networks in autism […] The majority of published studies to date have examined connectivity of specific individual brain regions, without a broader theoretically driven systems-level approach.

We propose that a systems-level approach is critical for understanding the neurobiology of autism, and that the anterior insula is a key node in coordinating brain network interactions, due to its unique anatomy, location, function, and connectivity.”

ii. Romantic Relationships and Relationship Satisfaction Among Adults With Asperger Syndrome and High‐Functioning Autism.

“Participants, 31 recruited via an outpatient clinic and 198 via an online survey, were asked to answer a number of self-report questionnaires. The total sample comprised 229 high-functioning adults with ASD (40% males, average age: 35 years). […] Of the total sample, 73% indicated romantic relationship experience and only 7% had no desire to be in a romantic relationship. ASD individuals whose partner was also on the autism spectrum were significantly more satisfied with their relationship than those with neurotypical partners. Severity of autism, schizoid symptoms, empathy skills, and need for social support were not correlated with relationship status. […] Our findings indicate that the vast majority of high-functioning adults with ASD are interested in romantic relationships.”

Those results are very different from other results in the field – for example: “[a] meta-analysis of follow-up studies examining outcomes of ASD individuals revealed that, [o]n average only 14% of the individuals included in the reviewed studies were married or ha[d] a long-term, intimate relationship (Howlin, 2012)” – and one major reason is that they only include high-functioning autistics. I also feel sort of iffy about the validity of the selection method used for procuring the online sample; this may be another major factor (almost one third of the participants had a university degree, so this is definitely not a random sample of high-functioning autistics; ‘high-functioning’ autistics are not that high-functioning in the general setting). Also, the sex ratio is very skewed, as 60% of the participants in the study were female. A sex ratio like that may not sound like a big problem, but it is a major problem, because a substantial majority of individuals with mild autism are males. Whereas the sex ratio is almost equal in the context of syndromic ASD, non-syndromic ASD is much more prevalent in males, with sex ratios approaching 1:7 in milder cases (link). These people are definitely looking at the milder cases, which means that a sample which skews female will not be remotely similar to most random samples of such individuals taken in the community setting. And this matters because females do better than males. A discussion can be had about the extent to which women are under-diagnosed, but I have not seen data convincing me this is a major problem. It’s important to keep in mind in that context that the autism diagnosis is not based on phenotype alone, but on a phenotype-environment interaction; if you have what might be termed ‘an autistic phenotype’ but you are not suffering any significant ill effects because you’re able to compensate relatively well (i.e., you are able to handle ‘the environment’ reasonably well despite the neurological makeup you’ve ended up with), you should not get an autism diagnosis – a diagnostic requirement is ‘clinically significant impairment in functioning’.

Anyway some more related data from the publication:

“Studies that analyze outcomes exclusively for ASD adults without intellectual impairment are rare. […] Engström, Ekström, and Emilsson (2003) recruited previous patients with an ASD diagnosis from four psychiatric clinics in Sweden. They reported that 5 (31%) of 16 adults with ASD had ‘some form of relation with a partner.’ Hofvander et al. (2009) analyzed data from 122 participants who had been referred to outpatient clinics for autism diagnosis. They found that 19 (16%) of all participants had lived in a long-term relationship. Renty and Roeyers (2006) […] reported that at the time of the[ir] study 19% of 58 ASD adults had a romantic relationship and 8.6% were married or living with a partner. Cederlund, Hagberg, Billstedt, Gillberg, and Gillberg (2008) conducted a follow-up study of male individuals (aged 16–36 years) who had been diagnosed with Asperger syndrome at least 5 years before. […] at the time of the study, three (4%) [out of 76 male ASD individuals] of them were living in a long-term romantic relationship and 10 (13%) had had romantic relationships in the past.”

A few more data and observations from the study:

“A total of 166 (73%) of the 229 participants endorsed currently being in a romantic relationship or having a history of being in a relationship; 100 (44%) reported current involvement in a romantic relationship; 66 (29%) endorsed that they were currently single but have a history of involvement in a romantic relationship; and 63 (27%) participants did not have any experience with romantic relationships. […] Participants without any romantic relationship experience were significantly more likely to be male […] According to participants’ self-report, one fifth (20%) of the 100 participants who were currently involved in a romantic relationship were with an ASD partner. […] Of the participants who were currently single, 65% said that contact with another person was too exhausting for them, 61% were afraid that they would not be able to fulfil the expectations of a romantic partner, and 57% said that they did not know how they could find and get involved with a partner; and 50% stated that they did not know how a romantic relationship works or how they would be expected to behave in a romantic relationship”

“[P]revious studies that exclusively examined adults with ASD without intellectual impairment reported lower levels of romantic relationship experience than the current study, with numbers varying between 16% and 31% […] The results of our study can be best compared with the results of Hofvander et al. (2009) and Renty and Roeyers (2006): They selected their samples […] using methods that are comparable to ours. Hofvander et al. (2009) found that 16% of their participants have had romantic relationship experience in the past, compared to 29% in our sample; and Renty and Roeyers (2006) report that 28% of their participants were either married or engaged in a romantic relationship at the time of their study, compared to 44% in our study. […] Compared to typically developed individuals the percentage of ASD individuals with a romantic relationship partner is relatively low (Weimann, 2010). In the group aged 27–59 years, 68% of German males live together with a partner, 27% are single, and 5% still live with their parents. In the same age group, 73% of all females live with a partner, 26% live on their own, and 2% still live with their parents.”

“As our results show, it is not the case that male ASD individuals do not feel a need for romantic relationships. In fact, the contrary is true. Single males had a greater desire to be in a romantic relationship than single females, and males were more distressed than females about not being in a romantic relationship.” (…maybe in part because the females who were single were more likely than the males who were single to be single by choice?)

“Our findings showed that being with a partner who also has an ASD diagnosis makes a romantic relationship more satisfying for ASD individuals. None of the participants, who had been with a partner in the past but then separated, had been together with an ASD partner. This might indicate that once a person with ASD has found a partner who is also on the spectrum, a relationship might be very stable and long lasting.”

iii. Reward Processing in Autism.

“The social motivation hypothesis of autism posits that infants with autism do not experience social stimuli as rewarding, thereby leading to a cascade of potentially negative consequences for later development. […] Here we use functional magnetic resonance imaging to examine social and monetary rewarded implicit learning in children with and without autism spectrum disorders (ASD). Sixteen males with ASD and sixteen age- and IQ-matched typically developing (TD) males were scanned while performing two versions of a rewarded implicit learning task. In addition to examining responses to reward, we investigated the neural circuitry supporting rewarded learning and the relationship between these factors and social development. We found diminished neural responses to both social and monetary rewards in ASD, with a pronounced reduction in response to social rewards (SR). […] Moreover, we show a relationship between ventral striatum activity and social reciprocity in TD children. Together, these data support the hypothesis that children with ASD have diminished neural responses to SR, and that this deficit relates to social learning impairments. […] When we examined the general neural response to monetary and social reward events, we discovered that only TD children showed VS [ventral striatum] activity for both reward types, whereas ASD children did not demonstrate a significant response to either monetary or SR. However, significant between-group differences were shown only for SR, suggesting that children with ASD may be specifically impaired on processing SR.”

I’m not quite sure I buy that the methodology captures what it is supposed to capture (“The SR feedback consisted of a picture of a smiling woman with the words “That’s Right!” in green text for correct trials and a picture of the same woman with a sad face along with the words “That’s Wrong” in red text for incorrect trials”) (this is supposed to be the ‘social reward feedback’), but on the other hand: “The chosen reward stimuli, faces and coins, are consistent with those used in previous studies of reward processing” (so either multiple studies are of dubious quality, or this kind of method actually ‘works’ – but I don’t know enough about the field to tell which of the two conclusions apply).

iv. The Social Motivation Theory of Autism.

“The idea that social motivation deficits play a central role in Autism Spectrum Disorders (ASD) has recently gained increased interest. This constitutes a shift in autism research, which has traditionally focused more intensely on cognitive impairments, such as Theory of Mind deficits or executive dysfunction, while granting comparatively less attention to motivational factors. This review delineates the concept of social motivation and capitalizes on recent findings in several research areas to provide an integrated picture of social motivation at the behavioral, biological and evolutionary levels. We conclude that ASD can be construed as an extreme case of diminished social motivation and, as such, provides a powerful model to understand humans’ intrinsic drive to seek acceptance and avoid rejection.”

v. Stalking, and Social and Romantic Functioning Among Adolescents and Adults with Autism Spectrum Disorder.

“We examine the nature and predictors of social and romantic functioning in adolescents and adults with ASD. Parental reports were obtained for 25 ASD adolescents and adults (13-36 years), and 38 typical adolescents and adults (13-30 years). The ASD group relied less upon peers and friends for social (OR = 52.16, p < .01) and romantic learning (OR = 38.25, p < .01). Individuals with ASD were more likely to engage in inappropriate courting behaviours (χ2 (df = 19) = 3168.74, p < .001) and were more likely to focus their attention upon celebrities, strangers, colleagues, and ex-partners (χ2 (df = 5) = 2335.40, p < .001), and to pursue their target longer than controls (t = -2.23, df = 18.79, p < .05).”

“Examination of relationships the individuals were reported to have had with the target of their social or romantic interest, indicated that ASD adolescents and adults sought to initiate fewer social and romantic relationships but across a wider variety of people, such as strangers, colleagues, acquaintances, friends, ex-partners, and celebrities. […] typically developing peers […] were more likely to target colleagues, acquaintances, friends, and ex-partners in their relationship attempts, whilst the ASD group targeted these less frequently than expected, and attempted to initiate relationships significantly more frequently than is typical, with strangers and celebrities. […] In attempting to pursue and initiate social and romantic relationships, the ASD group were reported to display a much wider variety of courtship behaviours than the typical group. […] ASD adolescents and adults were more likely to touch the person of interest inappropriately, believe that the target must reciprocate their feelings, show obsessional interest, make inappropriate comments, monitor the person’s activities, follow them, pursue them in a threatening manner, make threats against the person, and threaten self-harm. ASD individuals displayed the majority of the behaviours indiscriminately across all types of targets. […] ASD adolescents and adults were also found […] to persist in their relationship pursuits for significantly longer periods of time than typical adolescents and adults when they received a negative or no response from the person or their family.”

April 4, 2017 Posted by | autism, Neurology, Papers, Psychology | Leave a comment