Econstudentlog

Principles of memory (III)

I have added a few observations from the last part of the book below. This is a short post, but I was getting tired of the lack of (front-page) updates – I do apologize for the infrequent updates lately to the (…fortunately very) few people who might care about this; I hope to have more energy for blogging in the weeks to come.

“The relative distinctiveness principle states that performance on memory tasks depends on how much an item stands out “relative to the background of alternatives from which it must be distinguished” at the time of retrieval […]. It is important to keep in mind that one of the determinants of whether an item is more or less distinct at the time of retrieval, relative to other items, is the relation between the conditions at encoding and those at retrieval (the encoding-retrieval principle). […] There has been a long history of distinctiveness as an important concept and research topic in memory, with numerous early studies examining both “distinctiveness” and “vividness” […]. Perhaps the most well-known example of distinctiveness is the isolation or von Restorff effect […] In the typical isolation experiment, there are two types of lists. The control list features words presented (for example) in black against a white background. The experimental list is identical to the control list in all respects except that one of the items is made to stand out in some fashion: it could be shown in green instead of black, in a larger font size, with a large red frame around it, the word trout could appear in a list in which all the other items are names of different vegetables, or any number of similar manipulations. The much-replicated result is that the unique item (the isolate) is remembered better than the item in the same position in the control list. […] The von Restorff effect is reliably obtained with many kinds of tests, including delayed free recall, serial recall, serial learning, recognition, and many others. It is also readily observable with nonverbal stimuli.”

“There have been many mathematical and computational models based on the idea of distinctiveness (see Neath & Brown, 2007, for a review). Here, we focus on one particular model that includes elements from many of the earlier models […]. Brown, Neath, and Chater (2007 […]) [Here’s a link to the paper, US] proposed a model called SIMPLE, which stands for Scale Independent Memory and Perceptual Learning. The model is scale independent in the sense that it applies equally to short-term/working memory and to long-term memory: the time scale is irrelevant to the operation of the model. The basic idea is that memory is best conceived as a discrimination task. Items are represented as occupying positions along one or more dimensions and, in general, those items with fewer close neighbors on the relevant dimensions at the time of retrieval will be more likely to be recalled than items with more close neighbors. […] According to SIMPLE, […] not only is the isolation effect due to relative distinctiveness, but the general shape of the serial position curve is due to relative distinctiveness. In general, free and serial recall produce a function in which the first few items are well recalled (the primacy effect) and the last few items are well recalled (the recency effect), but items in the middle are recalled less well. The magnitude of primacy and recency effects can be affected by many different manipulations, and depending on the experimental design, one can observe almost all possibilities, from almost all primacy to almost all recency. The thing that all have in common, however, is that the experimental manipulation has caused some items to be more distinct at the time of retrieval than other items. […] It has been claimed that distinctiveness effects are observed only in explicit memory […] We suggest that the search for distinctiveness effects in implicit memory tasks is similar to the search for interference in implicit memory tasks […]: What is important is whether the information that is supposed to be distinct (in the former case) or interfere (in the latter) is relevant to the task. If it is not relevant, then the effects will not be seen.”
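
As I understand SIMPLE, the temporal version of the model places each item at the logarithm of its temporal distance at the moment of retrieval (which is what compresses older items together and spreads recent ones out), and an item’s retrievability is inversely related to its summed similarity to all list items. Here is a minimal Python sketch of that computation — the similarity parameter and the presentation timings below are made-up illustrative values, not numbers from the paper:

```python
import math

def simple_distinctiveness(offsets, c=10.0):
    """Toy SIMPLE-style computation: items live on a log-temporal dimension,
    and an item is distinct to the extent that it has few close neighbors
    there at the time of retrieval."""
    positions = [math.log(t) for t in offsets]  # log-compressed temporal distances
    scores = []
    for mi in positions:
        # Summed similarity to all items (including itself, where similarity = 1).
        summed = sum(math.exp(-c * abs(mi - mj)) for mj in positions)
        scores.append(1.0 / summed)  # crowded items -> low relative distinctiveness
    return scores

# A 10-item list presented one item per second, tested 3 s after the last item:
offsets = [12 - k for k in range(10)]  # item 1 is 12 s old, item 10 is 3 s old
for item, score in enumerate(simple_distinctiveness(offsets), start=1):
    print(f"item {item:2d}: {score:.3f}")
```

Run with these made-up numbers, the scores trace out a bowed serial position curve: the most recent items are the most distinct (recency), and the first item gets a small boost because it only has neighbors on one side (a touch of primacy).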

“[O]lder adults perform worse than younger adults on free recall but not on recognition (Craik & McDowd, 1987), even though both tasks are considered to tap “episodic” or “explicit” memory. Performance on another episodic memory task, cued recall, falls in between recall and recognition, although the size of the age-related difference varies […]. Older adults also experience more tip-of-the-tongue events than younger adults […] and have more word-finding difficulties in confrontation naming tasks in which, for example, a drawing is shown and the subject is asked to name the object […]. [I]tem recall is not as affected by aging as order recall […] in comparison to younger adults, older adults have more difficulty remembering the source of a memory […] In addition to recalling fewer correct items than younger adults (errors of omission), older adults are also more likely than younger adults to produce false positive responses in a variety of paradigms (errors of commission) […] Similarly, older adults are more likely than younger adults to report that an imagined event was real […] Due perhaps to reduced cognitive capacity, older adults may encode fewer specific details at encoding and thus are less able to take advantage of specific cues at retrieval. […] older adults and individuals with amnesia perform quite well on tasks that are supported by generic processing but less well (compared to younger adults) on those that require the recollection of a specific item or event. […] when older adults are asked to recall events from their own lives, they recall more from the period when they were approximately 15 to 25 years of age than they do from the period when they were approximately 30 to 40 years of age.” [Here’s incidentally an example paper exploring some of these topics in more detail: Modeling age-related differences in immediate memory using SIMPLE. As should be obvious from the title, the paper relates to the SIMPLE model discussed in the previous paragraph, US]

“There is a large literature on the relation between attention and memory, and many times "memory" is used when a perhaps more accurate term is "attention" (see, for example, Baddeley, 1993; Cowan, 1995). […] As yet, we have no principles linking memory and attention.”

“Forgetting is due to extrinsic factors; in particular, items that have more close neighbors in the region of psychological space at the time of retrieval are less likely to be remembered than items with fewer close neighbors […]. In addition, tasks that require specific information about the context in which memories were formed seem to be more vulnerable to interference or forgetting at the time of the retrieval attempt than those that can rely on more general information […]. Taken together, these principles suggest that the search for the forgetting function is not likely to be successful. […] a failure to remember can be due to multiple different causes, much like the failure of a car can be due to multiple different causes.” (Do here also keep in mind the comments included on this topic in the last paragraph of my first post about the book, US)

November 13, 2018 | Books, Psychology

Perception (I)

Here’s my short goodreads review of the book. In this post I’ll include some observations and links related to the first half of the book’s coverage.

“Since the 1960s, there have been many attempts to model the perceptual processes using computer algorithms, and the most influential figure of the last forty years has been David Marr, working at MIT. […] Marr and his colleagues were responsible for developing detailed algorithms for extracting (i) low-level information about the location of contours in the visual image, (ii) the motion of those contours, and (iii) the 3-D structure of objects in the world from binocular disparities and optic flow. In addition, one of his lasting achievements was to encourage researchers to be more rigorous in the way that perceptual tasks are described, analysed, and formulated and to use computer models to test the predictions of those models against human performance. […] Over the past fifteen years, many researchers in the field of perception have characterized perception as a Bayesian process […] According to Bayesian theory, what we perceive is a consequence of probabilistic processes that depend on the likelihood of certain events occurring in the particular world we live in. Moreover, most Bayesian models of perceptual processes assume that there is noise in the sensory signals and the amount of noise affects the reliability of those signals – the more noise, the less reliable the signal. Over the past fifteen years, Bayes theory has been used extensively to model the interaction between different discrepant cues, such as binocular disparity and texture gradients to specify the slant of an inclined surface.”
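An aside on the cue-combination idea: under the standard assumptions (independent Gaussian noise on each cue), the statistically optimal combined estimate weights each cue by its reliability — the inverse of its noise variance. The sketch below is a toy illustration of that general recipe, not the model from any specific study, and the slant numbers are made up:

```python
def combine_cues(estimates, variances):
    """Reliability-weighted (maximum-likelihood) cue combination:
    each cue counts in proportion to 1/variance, so noisier signals
    are automatically down-weighted."""
    reliabilities = [1.0 / v for v in variances]
    total = sum(reliabilities)
    combined = sum(r * s for r, s in zip(reliabilities, estimates)) / total
    combined_variance = 1.0 / total  # lower than either cue's variance alone
    return combined, combined_variance

# Hypothetical slant estimates (degrees): binocular disparity says 30,
# texture says 40, but the texture signal is twice as noisy.
slant, var = combine_cues([30.0, 40.0], [4.0, 8.0])
print(f"combined slant = {slant:.1f} deg (variance {var:.2f})")
# -> 33.3 deg: the percept is pulled toward the more reliable disparity cue.
```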

“All surfaces have the property of reflectance — that is, the extent to which they reflect (rather than absorb) the incident illumination — and those reflectances can vary between 0 per cent and 100 per cent. Surfaces can also be selective in the particular wavelengths they reflect or absorb. Our colour vision depends on these selective reflectance properties […]. Reflectance characteristics describe the physical properties of surfaces. The lightness of a surface refers to a perceptual judgement of a surface’s reflectance characteristic — whether it appears as black or white or some grey level in between. Note that we are talking about the perception of lightness — rather than brightness — which refers to our estimate of how much light is coming from a particular surface or is emitted by a source of illumination. The perception of surface lightness is one of the most fundamental perceptual abilities because it allows us not only to differentiate one surface from another but also to identify the real-world properties of a particular surface. Many textbooks start with the observation that lightness perception is a difficult task because the amount of light reflected from a particular surface depends on both the reflectance characteristic of the surface and the intensity of the incident illumination. For example, a piece of black paper under high illumination will reflect back more light to the eye than a piece of white paper under dim illumination. As a consequence, lightness constancy — the ability to correctly judge the lightness of a surface under different illumination conditions — is often considered to be an ‘achievement’ of the perceptual system. […] The alternative starting point for understanding lightness perception is to ask whether there is something that remains constant or invariant in the patterns of light reaching the eye with changes of illumination. In this case, it is the relative amount of light reflected off different surfaces. Consider two surfaces that have different reflectances—two shades of grey. The actual amount of light reflected off each of the surfaces will vary with changes in the illumination but the relative amount of light reflected off the two surfaces remains the same. This shows that lightness perception is necessarily a spatial task and hence a task that cannot be solved by considering one particular surface alone. Note that the relative amount of light reflected off different surfaces does not tell us about the absolute lightnesses of different surfaces—only their relative lightnesses […] Can our perception of lightness be fooled? Yes, of course it can and the ways in which we make mistakes in our perception of the lightnesses of surfaces can tell us much about the characteristics of the underlying processes.”
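The invariance the authors describe is easy to verify numerically: the luminance reaching the eye is (to a first approximation) reflectance times illumination, so absolute luminances swing with the light source while the ratio between two surfaces stays fixed. A toy example in arbitrary units:

```python
# Luminance reaching the eye ~ surface reflectance * illumination intensity.
reflectances = {"white_paper": 0.90, "black_paper": 0.05}

for illumination in (10.0, 1000.0):  # dim vs. bright light (arbitrary units)
    luminance = {k: r * illumination for k, r in reflectances.items()}
    ratio = luminance["white_paper"] / luminance["black_paper"]
    print(illumination, luminance, f"ratio = {ratio:.0f}")

# Under bright light the black paper (0.05 * 1000 = 50) sends more light to
# the eye than the white paper under dim light (0.90 * 10 = 9), yet within
# any one scene the white/black ratio is always 0.90/0.05 = 18 -- the
# invariant that supports lightness constancy.
```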

“From a survival point of view, the ability to differentiate objects and surfaces in the world by their ‘colours’ (spectral reflectance characteristics) can be extremely useful […] Most species of mammals, birds, fish, and insects possess several different types of receptor, each of which has a different spectral sensitivity function […] having two types of receptor with different spectral sensitivities is the minimum necessary for colour vision. This is referred to as dichromacy, and the majority of mammals are dichromats, with the exception of the Old World monkeys and humans. […] The only difference between lightness and colour perception is that in the latter case we have to consider the way a surface selectively reflects (and absorbs) different wavelengths, rather than just a surface’s average reflectance over all wavelengths. […] The similarities between the tasks of extracting lightness and colour information mean that we can ask a similar question about colour perception [as we did about lightness perception] – what is the invariant information that could specify the reflectance characteristic of a surface? […] The information that is invariant under changes of spectral illumination is the relative amounts of long, medium, and short wavelength light reaching our eyes from different surfaces in the scene. […] the successful identification and discrimination of coloured surfaces is dependent on making spatial comparisons between the amounts of short, medium, and long wavelength light reaching our eyes from different surfaces. As with lightness perception, colour perception is necessarily a spatial task. It follows that if a scene is illuminated by the light of just a single wavelength, the appropriate spatial comparisons cannot be made. This can be demonstrated by illuminating a real-world scene containing many different coloured objects with yellow sodium light that contains only a single wavelength. All objects, whatever their ‘colours’, will only reflect back to the eye different intensities of that sodium light and hence there will only be absolute but no relative differences between the short, medium, and long wavelength lightness records. There is a similar, but less dramatic, effect on our perception of colour when the spectral characteristics of the illumination are restricted to just a few wavelengths, as is the case with fluorescent lighting.”

“Consider a single receptor mechanism, such as a rod receptor in the human visual system, that responds to a limited range of wavelengths—referred to as the receptor’s spectral sensitivity function […]. This hypothetical receptor is more sensitive to some wavelengths (around 550 nm) than others and we might be tempted to think that a single type of receptor could provide information about the wavelength of the light reaching the receptor. This is not the case, however, because an increase or decrease in the response of that receptor could be due to either a change in the wavelength or an increase or decrease in the amount of light reaching the receptor. In other words, the output of a given receptor or receptor type perfectly confounds changes in wavelength with changes in intensity because it has only one way of responding — that is, more or less. This is Rushton’s Principle of Univariance — there is only one way of varying or one degree of freedom. […] On the other hand, if we consider a visual system with two different receptor types, one more sensitive to longer wavelengths (L) and the other more sensitive to shorter wavelengths (S), there are two degrees of freedom in the system and thus the possibility of signalling our two independent variables — wavelength and intensity […] it is quite possible to have a colour visual system that is based on just two receptor types. Such a colour visual system is referred to as dichromatic.”
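The univariance argument can be spelled out in a few lines of code. The bell-shaped sensitivity function below is a hypothetical stand-in (a Gaussian over wavelength, not a real pigment curve), but it makes the point: one receptor’s output confounds wavelength with intensity, whereas the ratio of two receptor types’ outputs depends on wavelength alone:

```python
import math

def sensitivity(wavelength_nm, peak=550.0, width=50.0):
    """Hypothetical bell-shaped spectral sensitivity function."""
    return math.exp(-((wavelength_nm - peak) ** 2) / (2 * width ** 2))

def response(wavelength_nm, intensity, peak=550.0):
    return intensity * sensitivity(wavelength_nm, peak)

# Univariance: two physically different lights, one receptor, identical output.
r1 = response(550.0, intensity=1.0)                       # dim light at the peak
r2 = response(500.0, intensity=1.0 / sensitivity(500.0))  # brighter light off-peak
print(f"{r1:.3f} vs {r2:.3f}")  # equal -> indistinguishable to this receptor

# Two receptor types: the response *ratio* varies with wavelength only,
# so wavelength and intensity are no longer confounded.
def ls_ratio(wavelength_nm, intensity):
    L = response(wavelength_nm, intensity, peak=570.0)
    S = response(wavelength_nm, intensity, peak=440.0)
    return L / S

print(ls_ratio(500.0, 1.0), ls_ratio(500.0, 7.0))  # same ratio at any intensity
```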

“So why is the human visual system trichromatic? The answer can be found in a phenomenon known as metamerism. So far, we have restricted our discussion to the effect of a single wavelength on our dichromatic visual system: for example, a single wavelength of around 550 nm that stimulated both the long and short receptor types about equally […]. But what would happen if we stimulated our dichromatic system with light of two different wavelengths at the same time — one long wavelength and one short wavelength? With a suitable choice of wavelengths, this combination of wavelengths would also have the effect of stimulating the two receptor types about equally […] As a consequence, the output of the system […] with this particular mixture of wavelengths would be indistinguishable from that created by the single wavelength of 550 nm. These two indistinguishable stimulus situations are referred to as metamers and a little thought shows that there would be many thousands of combinations of wavelength that produce the same activity […] in a dichromatic visual system. As a consequence, all these different combinations of wavelengths would be indistinguishable to a dichromatic observer, even though they were produced by very different combinations of wavelengths. […] Is there any way of avoiding the problem of metamerism? The answer is no but we can make things better. If a visual system had three receptor types rather than two, then many of the combinations of wavelengths that produce an identical pattern of activity in two of the mechanisms (L and S) would create a different amount of activity in our third receptor type (M) that is maximally sensitive to medium wavelengths. Hence the number of indistinguishable metameric matches would be significantly reduced but they would never be eliminated. Using the same logic, it follows that a further increase in the number of receptor types (beyond three) would reduce the problem of metamerism even more […]. There would, however, also be a cost. Having more distinct receptor types in a finite-sized retina would increase the average spacing between the receptors of the same type and thus make our acuity for fine detail significantly poorer. There are many species, such as dragonflies, with more than three receptor types in their eyes but the larger number of receptor types typically serves to increase the range of wavelengths to which the animal is sensitive into the infra-red or ultra-violet parts of the spectrum, rather than to reduce the number of metamers. […] the sensitivity of the short wavelength receptors in the human eye only extends to ~540 nm — the S receptors are insensitive to longer wavelengths. This means that human colour vision is effectively dichromatic for combinations of wavelengths above 540 nm. In addition, there are no short wavelength cones in the central fovea of the human retina, which means that we are also dichromatic in the central part of our visual field. The fact that we are unaware of this lack of colour vision is probably due to the fact that our eyes are constantly moving. […] It is […] important to appreciate that the description of the human colour visual system as trichromatic is not a description of the number of different receptor types in the retina – it is a property of the whole visual system.”
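Metamers can be constructed directly in the same toy framework (again with made-up Gaussian cone sensitivities): solve for a two-wavelength mixture that matches a single 550 nm light in the L and S channels, and the two lights become indistinguishable to a dichromat — while a third, M-type receptor still tells them apart:

```python
import math

def sens(w, peak, width=60.0):
    """Hypothetical Gaussian cone sensitivity."""
    return math.exp(-((w - peak) ** 2) / (2 * width ** 2))

L_PEAK, M_PEAK, S_PEAK = 570.0, 540.0, 440.0
target = 550.0           # single-wavelength reference light, unit intensity
w1, w2 = 620.0, 480.0    # the two primaries making up the mixture

# Solve the 2x2 linear system so the mixture matches the target in L and S:
#   a*sens(w1, L) + b*sens(w2, L) = sens(target, L)
#   a*sens(w1, S) + b*sens(w2, S) = sens(target, S)
a11, a12, t1 = sens(w1, L_PEAK), sens(w2, L_PEAK), sens(target, L_PEAK)
a21, a22, t2 = sens(w1, S_PEAK), sens(w2, S_PEAK), sens(target, S_PEAK)
det = a11 * a22 - a12 * a21
a = (t1 * a22 - a12 * t2) / det
b = (a11 * t2 - t1 * a21) / det

for name, peak in [("L", L_PEAK), ("S", S_PEAK), ("M", M_PEAK)]:
    single = sens(target, peak)
    mixture = a * sens(w1, peak) + b * sens(w2, peak)
    print(f"{name}: single = {single:.4f}, mixture = {mixture:.4f}")
# L and S match exactly (a metamer for a dichromat), but the M channel
# responds differently -- the third receptor type breaks the metamer.
```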

“Recent research has shown that although the majority of humans are trichromatic there can be significant differences in the precise matches that individuals make when matching colour patches […] the absence of one receptor type will result in a greater number of colour confusions than normal and this does have a significant effect on an observer’s colour vision. Protanopia is the absence of long wavelength receptors, deuteranopia the absence of medium wavelength receptors, and tritanopia the absence of short wavelength receptors. These three conditions are often described as ‘colour blindness’ but this is a misnomer. We are all colour blind to some extent because we all suffer from colour metamerism and fail to make discriminations that would be very apparent to any biological or machine vision system with a greater number of receptor types. For example, most stomatopod crustaceans (mantis shrimps) have twelve different visual pigments and they also have the ability to detect both linear and circularly polarized light. What I find interesting is that we believe, as trichromats, that we have the ability to discriminate all the possible shades of colour (reflectance characteristics) that exist in our world. […] we are typically unaware of the limitations of our visual systems because we have no way of comparing what we see normally with what would be seen by a ‘better’ visual system.”

“We take it for granted that we are able to segregate the visual input into separate objects and distinguish objects from their backgrounds and we rarely make mistakes except under impoverished conditions. How is this possible? In many cases, the boundaries of objects are defined by changes of luminance and colour and these changes allow us to separate or segregate an object from its background. But luminance and colour changes are also present in the textured surfaces of many objects and therefore we need to ask how it is that our visual system does not mistake these luminance and colour changes for the boundaries of objects. One answer is that object boundaries have special characteristics. In our world, most objects and surfaces are opaque and hence they occlude (cover) the surface of the background. As a consequence, the contours of the background surface typically end—they are ‘terminated’—at the boundary of the occluding object or surface. Quite often, the occluded contours of the background are also revealed at the opposite side of the occluding surface because they are physically continuous. […] The impression of occlusion is enhanced if the occluded contours contain a range of different lengths, widths, and orientations. In the natural world, many animals use colour and texture to camouflage their boundaries as well as to fool potential predators about their identity. […] There is an additional source of information — relative motion — that can be used to segregate a visual scene into objects and their backgrounds and to break any camouflage that might exist in a static view. A moving, opaque object will progressively occlude and dis-occlude (reveal) the background surface so that even a well-camouflaged, moving animal will give away its location. Hence it is not surprising that a very common and successful strategy of many animals is to freeze in order not to be seen. Unless the predator has a sophisticated visual system to break the pattern or colour camouflage, the prey will remain invisible.”
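The camouflage-breaking power of relative motion can be illustrated with simple frame differencing (a standard computer-vision trick, used here purely as a toy model): a patch whose texture is statistically identical to its background is invisible in any single frame, but as soon as it moves, comparing two frames exposes the region it occludes and dis-occludes:

```python
import numpy as np

rng = np.random.default_rng(0)
background = rng.integers(0, 256, size=(8, 8))  # random-texture background
frame1 = background.copy()
frame2 = background.copy()

# A "camouflaged animal": a 3x3 patch drawn from the same texture
# distribution as the background. In frame2 it has moved one pixel right.
patch = rng.integers(0, 256, size=(3, 3))
frame1[2:5, 1:4] = patch
frame2[2:5, 2:5] = patch

# Either frame alone is just texture; the difference image lights up
# (almost surely) wherever the patch covered, uncovered, or shifted over
# the background -- giving the "animal's" location away.
print((frame1 != frame2).astype(int))
```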

Some links:

Perception.
Ames room. Inverse problem in optics.
Hermann von Helmholtz. Richard Gregory. Irvin Rock. James Gibson. David Marr. Ewald Hering.
Optical flow.
La dioptrique.
Necker cube. Rubin’s vase.
Perceptual constancy. Texture gradient.
Ambient optic array.
Affordance.
Luminance.
Checker shadow illusion.
Shape from shading/Photometric stereo.
Colour vision. Colour constancy. Retinex model.
Cognitive neuroscience of visual object recognition.
Motion perception.
Horace Barlow. Bernhard Hassenstein. Werner E. Reichardt. Sigmund Exner. Jan Evangelista Purkyně.
Phi phenomenon.
Motion aftereffect.
Induced motion.

October 14, 2018 | Biology, Books, Ophthalmology, Physics, Psychology

Principles of memory (II)

I have added a few more quotes from the book below:

“Watkins and Watkins (1975, p. 443) noted that cue overload is “emerging as a general principle of memory” and defined it as follows: “The efficiency of a functional retrieval cue in effecting recall of an item declines as the number of items it subsumes increases.” As an analogy, think of a person’s name as a cue. If you know only one person named Katherine, the name by itself is an excellent cue when asked how Katherine is doing. However, if you also know Cathryn, Catherine, and Kathryn, then it is less useful in specifying which person is the focus of the question. More formally, a number of studies have shown experimentally that memory performance systematically decreases as the number of items associated with a particular retrieval cue increases […] In many situations, a decrease in memory performance can be attributed to cue overload. This may not be the ultimate explanation, as cue overload itself needs an explanation, but it does serve to link a variety of otherwise disparate findings together.”
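
A toy sampling model conveys the flavour of cue overload. If one assumes — as models in the SAM tradition do — that a cue samples an item with probability proportional to the cue-to-item association strength, then every item added under a cue dilutes the target’s chances. The strengths below are made up:

```python
def sample_prob(target_strength, other_strengths):
    """Luce-choice/SAM-style sampling rule: the probability of retrieving
    the target is its strength divided by the summed strength of every
    item the cue subsumes."""
    return target_strength / (target_strength + sum(other_strengths))

# One Katherine vs. a growing set of equally strong Katherine-alikes:
for n in (1, 2, 4, 8):
    print(f"cue subsumes {n} item(s): P(target) = {sample_prob(1.0, [1.0] * (n - 1)):.3f}")
```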

“Memory, like all other cognitive processes, is inherently constructive. Information from encoding and cues from retrieval, as well as generic information, are all exploited to construct a response to a cue. Work in several areas has long established that people will use whatever information is available to help reconstruct or build up a coherent memory of a story or an event […]. However, although these strategies can lead to successful and accurate remembering in some circumstances, the same processes can lead to distortion or even confabulation in others […]. There are a great many studies demonstrating the constructive and reconstructive nature of memory, and the literature is quite well known. […] it is clear that recall of events is deeply influenced by a tendency to reconstruct them using whatever information is relevant and to repair holes or fill in the gaps that are present in memory with likely substitutes. […] Given that memory is a reconstructive process, it should not be surprising to find that there is a large literature showing that people have difficulty distinguishing between memories of events that happened and memories of events that did not happen […]. In a typical reality monitoring experiment […], subjects are shown pictures of common objects. Every so often, instead of a picture, the subjects are shown the name of an object and are asked to create a mental image of the object. The test involves presenting a list of object names, and the subject is asked to judge whether they saw the item (i.e., judge the memory as “real”) or whether they saw the name of the object and only imagined seeing it (i.e., judge the memory as “imagined”). People are more likely to judge imagined events as real than real events as imagined. The likelihood that a memory will be judged as real rather than imagined depends upon the vividness of the memory in terms of its sensory quality, detail, plausibility, and coherence […]. What this means is that there is not a firm line between memories for real and imagined events: if an imagined event has enough qualitative features of a real event it is likely to be judged as real.”

“One hallmark of reconstructive processes is that in many circumstances they aid in memory retrieval because they rely on regularities in the world. If we know what usually happens in a given circumstance, we can use that information to fill in gaps that may be present in our memory for that episode. This will lead to a facilitation effect in some cases but will lead to errors in cases in which the most probable response is not the correct one. However, if we take this standpoint, we must predict that the errors that are made when using reconstructive processes will not be random; in fact, they will display a bias toward the most likely event. This sort of mechanism has been demonstrated many times in studies of schema-based representations […], and language production errors […] but less so in immediate recall. […] Each time an event is recalled, the memory is slightly different. Because of the interaction between encoding and retrieval, and because of the variations that occur between two different retrieval attempts, the resulting memories will always differ, even if only slightly.”

“In this chapter we discuss the idea that a task or a process can be a “pure” measure of memory, without contamination from other hypothetical memory stores or structures, and without contributions from other processes. Our impurity principle states that tasks and processes are not pure, and therefore one cannot separate out the contributions of different memory stores by using tasks thought to tap only one system; one cannot count on subjects using only one process for a particular task […]. Our principle follows from previous arguments articulated by Kolers and Roediger (1984) and Crowder (1993), among others, that because every event recruits slightly different encoding and retrieval processes, there is no such thing as “pure” memory. […] The fundamental issue is the extent to which one can determine the contribution of a particular memory system or structure or process to performance on a particular memory task. There are numerous ways of assessing memory, and many different ways of classifying tasks. […] For example, if you are given a word fragment and asked to complete it with the first word that pops into your head, you are free to try a variety of strategies. […] Very different types of processing can be used by subjects even when given the same type of test or cue. People will use any and all processes to help them answer a question.”

“A free recall test typically provides little environmental support. A list of items is presented, and the subject is asked to recall which items were on the list. […] The experimenter simply says, “Recall the words that were on the list,” […] A typical recognition test provides more environmental support. Although a comparable list of items might have been presented, and although the subject is asked again about memory for an item in context, the subject is provided with a more specific cue, and knows exactly how many items to respond to. Some tests, such as word fragment completion and general knowledge questions, offer more environmental support. These tests provide more targeted cues, and often the cues are unique […] One common processing distinction involves the aspects of the stimulus that are focused on or are salient at encoding and retrieval: Subjects can focus more on an item’s physical appearance (data driven processing) or on an item’s meaning (conceptually driven processing […]). In general, performance on tasks such as free recall that offer little environmental support is better if the rememberer uses conceptual rather than perceptual processing at encoding. Although there is perceptual information available at encoding, there is no perceptual information provided at test so data-driven processes tend not to be appropriate. Typical recognition and cued-recall tests provide more specific cues, and as such, data-driven processing becomes more appropriate, but these tasks still require the subject to discriminate which items were presented in a particular specific context; this is often better accomplished using conceptually driven processing. […] In addition to distinctions between data driven and conceptually driven processing, another common distinction is between an automatic retrieval process, which is usually referred to as familiarity, and a nonautomatic process, usually called recollection […]. Additional distinctions abound. Our point is that very different types of processing can be used by subjects on a particular task, and that tasks can differ from one another on a variety of different dimensions. In short, people can potentially use almost any combination of processes on any particular task.”

“Immediate serial recall is basically synonymous with memory span. In one of the first reviews of this topic, Blankenship (1938, p. 2) noted that “memory span refers to the ability of an individual to reproduce immediately, after one presentation, a series of discrete stimuli in their original order.” The primary use of memory span was not so much to measure the capacity of a short-term memory system, but rather as a measure of intellectual abilities […]. Early on, however, it was recognized that memory span, whatever it was, varied as a function of a large number of variables […], and could even be increased substantially by practice […]. Nonetheless, memory span became increasingly seen as a measure of the capacity of a short-term memory system that was distinct from long-term memory. Generally, most individuals can recall about 7 ± 2 items (Miller, 1956) or the number of items that can be pronounced in about 2 s (Baddeley, 1986) without making any mistakes. Does immediate serial recall (or memory span) measure the capacity of short-term (or working) memory? The currently available evidence suggests that it does not. […] The main difficulty in attempting to construct a “pure” measure of immediate memory capacity is that […] the influence of previously acquired knowledge is impossible to avoid. There are numerous contributions of long-term knowledge not only to memory span and immediate serial recall […] but to other short-term tasks as well […] Our impurity principle predicts that when distinctions are made between types of processing (e.g., conceptually driven versus data driven; familiarity versus recollection; automatic versus conceptual; item specific versus relational), each of those individual processes will not be pure measures of memory.”

“Over the past 20 years great strides have been made in noninvasive techniques for measuring brain activity. In particular, PET and fMRI studies have allowed us to obtain an on-line glimpse into the hemodynamic changes that occur in the brain as stimuli are being processed, memorized, manipulated, and recalled. However, many of these studies rely on subtractive logic that explicitly assumes that (a) there are different brain areas (structures) subserving different cognitive processes and (b) we can subtract out background or baseline activity and determine which areas are responsible for performing a particular task (or process) by itself. There have been some serious challenges to these underlying assumptions […]. A basic assumption is that there is some baseline activation that is present all of the time and that the baseline is built upon by adding more activation. Thus, when the baseline is subtracted out, what is left is a relatively pure measure of the brain areas that are active in completing the higher-level task. One assumption of this method is that adding a second component to the task does not affect the simple task. However, this assumption does not always hold true. […] Even if the additive factors logic were correct, these studies often assume that a task is a pure measure of one process or another. […] Again, the point is that humans will utilize whatever resources they can recruit in order to perform a task. Individuals using different retrieval strategies (e.g., visualization, verbalization, lax or strict decision criteria, etc.) show very different patterns of brain activation even when performing the same memory task (Miller & Van Horn, 2007). This makes it extremely dangerous to assume that any task is made up of purely one process. Even though many researchers involved in neuroimaging do not make task purity assumptions, these examples “illustrate the widespread practice in functional neuroimaging of interpreting activations only in terms of the particular cognitive function being investigated (Cabeza et al., 2003, p. 390).” […] We do not mean to suggest that these studies have no value — they clearly do add to our knowledge of how cognitive functioning works — but, instead, would like to urge more caution in the interpretation of localization studies, which are sometimes taken as showing that an activated area is where some unique process takes place.”
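The worry about subtractive logic is easy to make concrete with toy numbers: subtraction assumes “pure insertion” — that adding the second task component leaves the first component’s activation unchanged — and the (entirely hypothetical) numbers below show how an interaction between components breaks the estimate:

```python
# All values are hypothetical activation levels in some region of interest.
baseline = 10.0       # activity during the control condition
component_a = 4.0     # activity added by process A on its own
component_b = 3.0     # activity added by process B on its own
interaction = -2.0    # doing A and B together changes how A is carried out

task_a = baseline + component_a
task_ab = baseline + component_a + component_b + interaction

estimated_b = task_ab - task_a  # the subtraction estimate of process B
print(f"true contribution of B: {component_b}, subtraction estimate: {estimated_b}")
# -> 3.0 vs. 1.0: when insertion is not "pure", the subtraction
#    misestimates the process of interest.
```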

October 6, 2018 | Biology, Books, Psychology

Principles of memory (I)

This book was interesting, but it was more interesting to me because it tells you a lot about what sort of memory research has taken place over the years than because the authors have presented a good model of how this stuff works. It’s the sort of book that makes you think.

I found the book challenging to blog, for a variety of reasons, but I’ve tried adding some observations of interest from the first four chapters of the coverage below.

“[I]n over 100 years of scientific research on memory, and nearly 50 years after the so-called cognitive revolution, we have nothing that really constitutes a widely accepted and frequently cited law of memory, and perhaps only one generally accepted principle. However, there are a plethora of effects, many of which have extensive literatures and hundreds of published empirical demonstrations. One reason for the lack of general laws and principles of memory might be that none exists. Tulving (1985a, p. 385), for example, has argued that “no profound generalizations can be made about memory as a whole,” because memory comprises many different systems and each system operates according to different principles. One can make “general statements about particular kinds of memory,” but one cannot make statements that would apply to all types of memory. […] Roediger (2008) also argues that no general principles of memory exist, but his reasoning and arguments are quite different. He reintroduces Jenkins’ (1979) tetrahedral model of memory, which views all memory experiments as comprising four factors: encoding conditions, retrieval conditions, subject variables, and events (materials and tasks). Using the tetrahedral model as a starting point, Roediger convincingly demonstrates that all of these variables can affect memory performance in different ways and that such complexity does not easily lend itself to a description using general principles. Because of the complexity of the interactions among these variables, Roediger suggests that “the most fundamental principle of learning and memory, perhaps its only sort of general law, is that in making any generalization about memory one must add that ‘it depends’” (p. 247). […] Where we differ is that we think it possible to produce general principles of memory that take into account these factors. […] The purpose of this monograph is to propose seven principles of human memory that apply to all memory regardless of the type of information, type of processing, hypothetical system supporting memory, or time scale. Although these principles focus on the invariants and empirical regularities of memory, the reader should be forewarned that they are qualitative rather than quantitative, more like regularities in biology than principles of geometry. […] Few, if any, of our principles are novel, and the list is by no means complete. We certainly do not think that there are only seven principles of memory nor, when more principles are proposed, do we think that all seven of our principles will be among the most important.”

“[T]he two most popular contemporary ways of looking at memory are the multiple systems view and the process (or proceduralist) view. Although these views are not always necessarily diametrically opposed […], their respective research programs are focused on different questions and search for different answers. The fundamental difference between structural and processing accounts of memory is whether different rules apply as a function of the way information is acquired, the type of material learned, and the time scale, or whether these can be explained using a single set of principles. […] Proponents of the systems view of memory suggest that memory is divided into multiple systems. Thus, their endeavor is focused on discovering and defining different systems and describing how they work. A “system” within this sort of framework is a structure that is anatomically and evolutionarily distinct from other memory systems and differs in its “methods of acquisition, representation and expression of knowledge” […] Using a variety of techniques, including neuropsychological and statistical methods, advocates of the multiple systems approach […] have identified five major memory systems: procedural memory, the perceptual representation system (PRS), semantic memory, primary or working memory, and episodic memory. […] In general, three criticisms are raised most often: The systems approach (a) has no criteria that produce exactly five different memory systems, (b) relies to a large extent on dissociations, and (c) has great difficulty accounting for the pattern of results observed at both ends of the life span. […] The multiple systems view […] lacks a principled and consistent set of criteria for delineating memory systems. Given the current state of affairs, it is not unthinkable to postulate 5 or 10 or 20 or even more different memory systems […]. Moreover, the specific memory systems that have been identified can be fractionated further, resulting in a situation in which the system is distributed in multiple brain locations, depending on the demands of the task at hand. […] The major strength of the systems view is usually taken to be its ability to account for data from amnesic patients […]. Those individuals seem to have specific deficits in episodic memory (recall and recognition) with very few, if any, deficits in semantic memory, procedural memory, or the PRS. […] [But on the other hand] age-related differences in memory do not follow the pattern predicted by the systems view.”

“From our point of view, asking where memory is “located” in the brain is like asking where running is located in the body. There are certainly parts of the body that are more important (the legs) or less important (the little fingers) in performing the task of running but, in the end, it is an activity that requires complex coordination among a great many body parts and muscle groups. To extend the analogy, looking for differences between memory systems is like looking for differences between running and walking. There certainly are many differences, but the main difference is that running requires more coordination among the different body parts and can be disrupted by small things (such as a corn on the toe) that may not interfere with walking at all. Are we to conclude, then, that running is located in the corn on your toe? […] although there is little doubt that more primitive functions such as low-level sensations can be organized in localized brain regions, it is likely that more complex cognitive functions, such as memory, are more accurately described by a dynamic coordination of distributed interconnected areas […]. This sort of approach implies that memory, per se, does not exist but, instead, “information … resides at the level of the large-scale network” (Bressler & Kelso, 2001, p. 33).”

“The processing view […] emphasizes encoding and retrieval processes instead of the system or location in which the memory might be stored. […] Processes, not structures, are what is fundamental. […] The major criticisms of the processing approaches parallel those that have been leveled at the systems view: (a) number of processes or components (instead of number of systems), (b) testability […], and (c) issues with a special population (amnesia rather than life span development). […] The major weakness of the processing view is the major strength of the systems view: patients diagnosed with amnesic syndrome. […] it is difficult to account for data showing a complete abolishment of episodic memory with no apparent effect on semantic memory, procedural memory, or the PRS without appealing to a separate memory store. […] We suggest that in the absence of a compelling reason to prefer the systems view over the processing view (or vice versa), it would be fruitful to consider memory from a functional perspective. We do not know how many memory systems there are or how to define what a memory system is. We do not know how many processes (or components of processing) there are or how to distinguish them. We do acknowledge that short-term memory and long-term memory seem to differ in some ways, as do episodic memory and semantic memory, but are they really fundamentally different? Both the systems approach and, to a lesser extent, the proceduralist approach emphasize differences. Our approach emphasizes similarities. We suggest that a search for general principles of memory, based on fundamental empirical regularities, can act as a spur to theory development and a reexamination of systems versus process theories of memory.”

“Our first principle states that all memory is cue driven; without a cue, there can be no memory […]. By cue we mean a specific prompt or query, such as “Did you see this word on the previous list?” […] cues can also be nonverbal, such as odors […], emotions […], nonverbal sounds […], and images […], to name only a few. Although in many situations the person is fully aware that the cue is part of a memory test, this need not be the case. […] Computer simulation models of memory acknowledge the importance of cues by building them into the system; indeed, computer simulation models will not work unless there is a cue. In general, some input is provided to these models, and then a response is provided. The so-called global models of memory, SAM, TODAM, and MINERVA2, are all cue driven. […] it is hard to conceive of a computer model of memory that is not cue dependent, simply because the computer requires something to start the retrieval process. […] There is near unanimity in the view that memory is cue driven. The one area in which this view is contested concerns a particular form of memory that is characterized by highly restrictive capacity limitations.”
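
To give a feel for what “cue driven” means in such models, here is a rough MINERVA2-flavoured sketch (simplified — for instance, the similarity computation below is cruder than the model’s actual normalization): nothing happens until a probe arrives, and the strength of the resulting “echo” depends entirely on how well that cue matches the stored traces:

```python
import numpy as np

def echo_intensity(probe, traces):
    """MINERVA2-style global match (rough sketch): every stored trace is
    activated according to its similarity to the cue raised to the third
    power, and the echo is the summed activation."""
    similarities = traces @ probe / probe.size  # per-trace feature match
    return (similarities ** 3).sum()            # cubing favors close matches

rng = np.random.default_rng(1)
traces = rng.choice([-1.0, 0.0, 1.0], size=(20, 24))  # 20 stored episodes
studied_cue = traces[5].copy()                  # a cue matching one trace
novel_cue = rng.choice([-1.0, 1.0], size=24)    # a cue matching nothing
print(echo_intensity(studied_cue, traces), echo_intensity(novel_cue, traces))
# The studied cue yields a far stronger echo; with no probe at all, the
# model simply has nothing to compute -- retrieval starts from the cue.
```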

“The most commonly cited principle of memory, according to [our] literature search […], is the encoding specificity principle […] Our version of this is called the encoding-retrieval principle [and] states that memory depends on the relation between the conditions at encoding and the conditions at retrieval. […] An appreciation for the importance of the encoding-retrieval interaction came about as the result of studies that examined the potency of various cues to elicit items from memory. A strong cue is a word that elicits a particular target word most of the time. For example, when most people hear the word bloom, the first word that pops into their head is often flower. A weak cue is a word that only rarely elicits a particular target. […] A reasonable prediction seems to be that strong cues should be better than weak cues for eliciting the correct item. However, this inference is not entirely correct because it fails to take into account the relationship between the encoding and retrieval conditions. […] the effectiveness of even a long-standing strong cue depends crucially on the processes that occurred at study and the cues available at test. This basic idea became the foundation of the transfer-appropriate processing framework. […] Taken literally, all that transfer-appropriate processing requires is that the processing done at encoding be appropriate given the processing that will be required at test; it permits processing that is identical and permits processing that is similar. It also, however, permits processing that is completely different as long as it is appropriate. […] many proponents of this view act as if the name were “transfer similar processing” and express the idea as requiring a “match” or “overlap” between study and test. […] However, just because increasing the match sometimes leads to enhanced retention does not mean that it is the match that is the critical variable. […] one can easily set up situations in which the degree of match is improved and memory retention is worse, or the degree of match is decreased and memory is better, or the degree of match is changed (either increased or decreased) and it has no effect on retention. Match, then, is simply not the critical variable in determining memory performance. […] The retrieval conditions include other possible responses, and these other items can affect performance. The most accurate description, then, is that it is the relation between encoding and retrieval that matters, not the degree of match or similarity. […] As Tulving (1983, p. 239) noted, the dynamic relation between encoding and retrieval conditions prohibits any statements that take the following forms:
1. “Items (events) of class X are easier to remember than items (events) of class Y.”
2. “Encoding operations of class X are more effective than encoding operations of class Y.”
3. “Retrieval cues of class X are more effective than retrieval cues of class Y.”
Absolute statements that do not specify both the encoding and the retrieval conditions are meaningless because an experimenter can easily change some aspect of the encoding or retrieval conditions and greatly change the memory performance.”

“In most areas of memory research, forgetting is seen as due to retrieval failure, often ascribed to some form of interference. There are, however, two areas of memory research that propose that forgetting is due to an intrinsic property of the memory trace, namely, decay. […] The two most common accounts view decay as either a mathematical convenience in a model, in which a parameter t is associated with time and leads to worse performance, or as some loss of information, in which it is unclear exactly what aspect of memory is decaying and what parts remain. In principle, a decay theory of memory could be proposed that is specific and testable, such as a process analogous to radioactive decay, in which it is understood precisely what is lost and what remains. Thus far, no such decay theory exists. Decay is posited as the forgetting mechanism in only two areas of memory research, sensory memory and short-term/working memory. […] One reason that time-based forgetting, such as decay, is often invoked is the common belief that short-term/working memory is immune to interference, especially proactive interference […]. This is simply not so. […] interference effects are readily observed in the short term. […] Decay predicts the same decrease for the same duration of distractor activity. Interference predicts differential effects depending on the presence or absence of interfering items. Numerous studies support the interference predictions and disconfirm predictions made on the basis of a decay view […] You might be tempted to say, yes, well, there are occasions in which the passage of time is either uncorrelated with or even negatively correlated with memory performance, but on average, you do worse with longer retention intervals. However, this confirms that the putative principle — the memorability of an event declines as the length of the storage interval increases — is not correct. […] One can make statements about the effects of absolute time, but only to the extent that one specifies both the conditions at encoding and those at retrieval. […] It is trivially easy to construct an experiment in which memory for an item does not change or even gets better the longer the retention interval. Here, we provide only eight examples, although there are numerous other examples; a more complete review and discussion are offered by Capaldi and Neath (1995) and Bjork (2001).”
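
The contrast in predictions is stark enough to write down directly. With made-up functional forms and parameters, decay depends only on elapsed time, while interference depends on what fills the interval:

```python
import math

def decay_prediction(delay_s, rate=0.2):
    """Toy decay account: retention is a function of elapsed time only."""
    return math.exp(-rate * delay_s)

def interference_prediction(n_interfering, similarity=0.5):
    """Toy interference account: retention depends on the number (and
    similarity) of intervening items, not on time as such."""
    return 1.0 / (1.0 + similarity * n_interfering)

delay = 10  # seconds
print("decay, unfilled 10 s interval:", round(decay_prediction(delay), 3))
print("decay, filled 10 s interval:  ", round(decay_prediction(delay), 3))  # identical
print("interference, 0 items:        ", round(interference_prediction(0), 3))
print("interference, 8 items:        ", round(interference_prediction(8), 3))
# Decay predicts the same loss for any equally long interval; interference
# predicts loss that tracks the intervening material -- which is the pattern
# the authors say the data support.
```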

September 22, 2018 | Books, Psychology

Personal Relationships… (III)

Some more observations from the book below:

“Early research on team processes […] noted that for teams to be effective members must minimize “process losses” and maximize “process gains” — that is, identify ways the team can collectively perform at a level that exceeds the average potential of individual members. To do so, teams need to minimize interpersonal disruptions and maximize interpersonal facilitation among their members […] the prevailing view — backed by empirical findings […] — is that positive social exchanges lead to positive outcomes in teams, whereas negative social exchanges lead to negative outcomes in teams. However, this view may be challenged, in that positive exchanges can sometimes lead to negative outcomes, whereas negative exchanges may sometimes lead to positive outcomes. For example, research on groupthink (Janis, 1972) suggests that highly cohesive groups can make suboptimal decisions. That is, cohesion […] can lead to suboptimal group performance. As another example, under certain circumstances, negative behavior (e.g., verbal attacks or sabotage directed at another member) by one person in the team could lead to a series of positive exchanges in the team. Such subsequent positive exchanges may involve stronger bonding among other members in support of the targeted member, enforcement of more positive and cordial behavioral norms among members, or resolution of possible conflict between members that might have led to this particular negative exchange.”

“[T]here is […] clear merit in considering social exchanges in teams from a social network perspective. Doing so requires the integration of dyadic-level processes with team-level processes. Specifically, to capture the extent to which certain forms of social exchange networks in teams are formed (e.g., friendship, instrumental, or adversarial ties), researchers must first consider the dyadic exchanges or ties between all members in the team. Doing so can help researchers identify the extent to which certain forms of ties or other social exchanges are dense in the team […] An important question […] is whether the level of social exchange density in the team might moderate the effects of social exchanges, much like social exchange strength might strengthen the effects of social exchanges […]. For example, might teams with denser social support networks be able to better handle negative social exchanges in the team when such exchanges emerge? […] the effects of differences in centrality and subgroupings or fault lines may vary, depending on certain factors. Specifically, being more central within the team’s network of social exchange may mean that the more central member receives more support from more members, or, rather, that the more central member is engaged in more negative social exchanges with more members. Likewise, subgroupings or fault lines in the team may lead to negative consequences when they are associated with lack of critical communication among members but not when they reflect the correct form of communication network […] social exchange constructs are likely to exert stronger influences on individual team members when exchanges are more highly shared (and reflected in more dense networks). By the same token, individuals are more likely to react to social exchanges in their team when exchanges are directed at them from more team members.”
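
The density notion invoked here is simple to compute once the dyadic ties have been measured: it is the proportion of possible dyads in the team that actually share a tie. A minimal sketch with a hypothetical five-person team:

```python
def density(n_members, ties):
    """Density of an undirected exchange network: the share of possible
    dyads that actually have a tie."""
    possible_dyads = n_members * (n_members - 1) / 2
    return len(ties) / possible_dyads

def degree_centrality(member, n_members, ties):
    """Share of a member's possible ties that are present."""
    return sum(1 for tie in ties if member in tie) / (n_members - 1)

# Hypothetical support ties in a 5-person team (each pair is one dyad):
support_ties = {(1, 2), (1, 3), (2, 3), (4, 5)}
print(f"support-network density: {density(5, support_ties):.2f}")   # 4/10 = 0.40
print(f"centrality of member 1:  {degree_centrality(1, 5, support_ties):.2f}")  # 0.50
```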

“[C]ustomer relationship management (CRM) has garnered growing interest from both research and practice communities in marketing. The purpose of CRM is “to efficiently and effectively increase the acquisition and retention of profitable customers by selectively initiating, building and maintaining appropriate relationships with them” […] Research has shown that successfully implemented CRM programs result in positive outcomes. In a recent meta-analysis, Palmatier, Dant, Grewal, and Evans (2006) found that investments in relationship marketing have a large, direct effect on seller objective performance. In addition, there has been ample research demonstrating that the effects of relationship marketing on outcomes are mediated by relational constructs that include trust […] and commitment […]. Combining these individual predictors by examining the effects of the global construct of relationship quality is also predictive of positive firm performance […] Meta-analytic findings suggest that CRM is more effective when relationships are built with an individual person rather than a selling firm […] Gutek (1995) proposed a typology of service delivery relationships with customers: encounters, pseudo-relationships, and relationships. […] service encounters usually consist of a solitary interaction between a customer and a service employee, with the expectation that they will not interact in the future. […] in a service encounter, customers do not identify with either the individual service employee with whom they interact or with the service organization. […] An alternate to the service encounter relationship is the pseudorelationship, which arises when a customer interacts with different individual service employees but usually (if not always) from the same service organization […] in pseudo-relationships, the customer identifies with the service of a particular service organization, not with an individual service employee. Finally, personal service relationships emerge when customers have repeated interactions with the same individual service provider […] We argue that the nature of these different types of service relationships […] will influence the types and levels of resources exchanged between the customer and the employee during the service interaction, which may further affect customer and employee outcomes from the service interaction.”

“According to social exchange theory, individuals form relationships and engage in social interactions as a means of obtaining needed resources […]. Within a social exchange relationship, individuals may exchange a variety of resources, both tangible and intangible. In the study of exchange relationships, the content of the exchange, or what resources are being exchanged, is often used as an indicator of the quality of the relationship. On the one hand, the greater the quality of resources exchanged, the better the quality of the relationship; on the other hand, the better the relationship, the more likely these resources are to be exchanged. Therefore, it is important to understand the specific resources exchanged between the service provider and the customer […] Ferris and colleagues (2009) proposed that several elements of a relationship develop because of social exchange: trust, respect, affect, and support. In an interaction between a service provider and a customer, most of the resources that are exchanged are non-economic in nature […]. Examples include smiling, making eye contact, and speaking in a rhythmic (non-monotone) vocal tone […]. Through these gestures, the service provider and the customer may demonstrate positive affect toward each other. In addition, greeting courteously, listening attentively to customers, and providing assistance to address customer needs may show the service provider’s respect and support to the customer; likewise, providing necessary information, clarifying their needs and expectations, cooperating with the service provider by following proper instructions, and showing gratitude to the service provider may indicate customers’ respect and support to the service provider. Further, through placing confidence in the fairness and honesty of the customer and the accuracy of the information the customer provides, the service provider offers the customer his or her trust; similarly, through placing confidence in the expertise and good intentions of the service provider, the customer offers his or her trust in the service provider’s competence and integrity. Some of the resources exchanged between a service provider and a customer, particularly special treatment, are of both economic and social value. For example, the customer may receive special discounts or priority service, which not only offers the customer economic benefits but also shows how much the service provider values and supports the customer. Similarly, a service provider who receives an extra big tip from a customer is not only better off economically but also gains a sense of recognition and esteem. The more these social resources of trust, respect, affect, and support, as well as special treatment, are mutually exchanged in the provider–customer interactions, the higher the quality of the service interaction for both parties involved. […] we argue that the potential for the exchange of resources […] depends on the nature of the service relationship. In other words, the quantity and quality of resources exchanged in discrete service encounters, pseudo-relationships, and personal service relationships are distinct.”

“Though customer–employee exchanges can be highly rewarding for both parties, they can also “turn ugly,” […]. In fact, though negative interactions such as rudeness, verbal abuse, or harassment are rare, employees are more likely to report them from customers than from coworkers or supervisors […] customer–employee exchanges are more likely to involve negative treatment than exchanges with organizational insiders. […] Such negative exchanges result in emotional labor and employee burnout […], covert sabotage of services or goods, or, in atypical cases […] direct retaliation and withdrawal […] Employee–customer exchanges are characterized by a strong power differential […] customers can influence the employees’ desired resources, have more choice over whether to continue the relationship, and can act in negative ways with few consequences (Yagil, 2008) […] One common way to conceptualize the impact of negative customer–employee interactions is Hirschman’s (1970) Exit-Voice-Loyalty model. Management can learn of customers’ dissatisfaction by their reduced loyalty, voice, or exit. […] Customers rarely, if ever, see themselves as the source of the problem; in contrast, employees are highly likely to see customers as the reason for a negative exchange […] when employees feel customers’ allocation of resources (e.g., tips, purchases) is not commensurate with the time or energy expended (i.e., distributive injustice) or interpersonal treatment of employees is unjustified or violates norms (i.e., interactional injustice), they feel anger and anxiety […] Given these strong emotional responses, emotional deviance is a possible outcome in the service exchange. Emotional deviance occurs when employees violate display rules by expressing their negative feelings […] To avoid emotional deviance, service providers engage in emotion regulation […]. In lab and field settings, perceived customer mistreatment is linked to “emotional labor,” specifically regulating emotions by faking or suppressing emotions […] Customer mistreatment — incivility as well as verbal abuse — is well linked to employee burnout, and this effect exists beyond other job stressors (e.g., time pressure, constraints) and beyond mistreatment from supervisors and coworkers”.

“Though a customer may complain or yell at an employee in hopes of improving service, most evidence suggests the opposite occurs. First, service providers tend to withdraw from negative or deviant customers (e.g., avoiding eye contact or going to the back room[)] […] Engaging in withdrawal or other counterproductive work behaviors (CWBs) in response to mistreatment can actually reduce burnout […], but the behavior is likely to create another dissatisfied customer or two in the meantime. Second, mistreatment can also result in employees’ reduced task performance in the service exchange. Stressful work events redirect attention toward sense making, even when mistreatment is fairly ambiguous or mild […] and thus reduce cognitive performance […]. Regulating those negative emotions also requires attentional resources, and both surface and deep acting reduce memory recall compared with expressing felt emotions […] Moreover, the more that service providers feel exhausted and burned out, the less positive their interpersonal performance […] Finally, perceived incivility or aggressive treatment from customers, and the resulting job dissatisfaction, is a key predictor of intentional customer-directed deviant behavior or service sabotage […] Dissatisfied employees engage in less extra-effort behavior than satisfied employees […]. More insidiously, they may engage in intentionally deviant performance that is likely to be covert […] and thus difficult to detect and manage […] Examples of service sabotage include intentionally giving the customer faulty or damaged goods, slowing down service pace, or making “mistakes” in the service transaction, all of which are then linked to lower service performance from the customers’ perspective […]. This creates a feedback loop from employee behaviors to customer perceptions […] Typical human resource practices can help service management […], and practices such as good selection and providing training should reduce the likelihood of service failures and the resulting negative reactions from customers […]. Support from colleagues can help buffer the reactions to customer-instigated mistreatment. Individual perceptions of social support moderate the strain from emotional labor […], and formal interventions increasing individual or unit-level social support reduce strain from emotionally demanding interactions with the public (Le Blanc, Hox, Schaufeli, & Taris, 2007).”

August 19, 2018 Posted by | Books, Psychology

Personal Relationships… (II)

Some more observations from the book below:

“Coworker support, or the processes by which coworkers provide assistance with tasks, information, or empathy, has long been considered an important construct in the stress and strain literature […] Social support fits the conservation of resources theory definition of a resource, and it is commonly viewed in that light […]. Support from coworkers helps employees meet the demands of their job, thus making strain less likely […]. In a sense, social support is the currency upon which social exchanges are based. […] The personality of coworkers can play an important role in the development of positive coworker relationships. For example, ample evidence suggests that those higher in conscientiousness and agreeableness are more likely to help coworkers […] Further, similarity in personality between coworkers (e.g., coworkers who are similar in their conscientiousness) draws coworkers together into closer relationships […] cross-sex relationships appear to be managed in a different manner than same-sex relationships. […] members of cross-sex friendships fear that those outside the relationship will misinterpret their friendship as sexual rather than platonic […] a key goal of partners in a cross-sex workplace friendship becomes convincing “third parties that the friendship is authentic.” As a result, cross-sex workplace friends will intentionally limit the intimacy of their communication or limit their non-work-related communication to situations perceived to demonstrate a nonsexual relationship, such as socializing with a cross-sex friend only in the presence of his or her spouse […] demographic dissimilarity in age and race can reduce the likelihood of positive coworker relationships. Chattopadhyay (1999) found that greater dissimilarity among group members on age and race was associated with less collegial relationships among coworkers, which was subsequently associated with less altruistic behavior […] Sias and Cahill (1998) found that a variety of situational characteristics, both inside and outside the workplace setting, help to predict the development of workplace friendship. For example, they found that factors outside the workplace, such as shared outside interests (e.g., similar hobbies), life events (e.g., having a child), and the simple passing of time can lead to a greater likelihood of a friendship developing. Moreover, internal workplace characteristics, including working together on tasks, physical proximity within the office, a common problem or enemy, and significant amounts of “downtime” that allow for greater socialization, also support friendship development in the workplace (see also Fine, 1986).”

“To build knowledge, employees need to be willing to learn and try new things. Positive relationships are associated with a higher willingness to engage in learning and experimentation […] and, importantly, sharing of that new knowledge to benefit others […] Knowledge sharing is dependent on high-quality communication between relational partners […] Positive relationships are characterized by less defensive communication when relational partners provide feedback (e.g., a suggestion for a better way to accomplish a task; Roberts, 2007). In a coworker context, this would involve accepting help from coworkers without putting up barriers to that help (e.g., nonverbal cues that the help is not appreciated or welcome). […] A recent meta-analysis by Chiaburu and Harrison (2008) found that coworker support was associated with higher performance and higher organizational citizenship behavior (both directed at individuals and directed at the organization broadly). These relationships held whether performance was self- or supervisor-rated […] Chiaburu and Harrison (2008) also found that coworker support was associated with higher satisfaction and organizational commitment […] Positive coworker exchanges are also associated with lower levels of employee withdrawal, including absenteeism, intention to turnover, and actual turnover […]. To some extent, these relationships may result from norms within the workplace, as coworkers help to set standards for behavior and not “being there” for other coworkers, particularly in situations where the work is highly interdependent, may be considered a significant violation of social norms within a positive working environment […] Perhaps not surprisingly, given the proximity and the amount of time spent with coworkers, workplace friendships will occasionally develop into romances and, potentially, marriages. While still small, the literature on married coworkers suggests that they experience a number of benefits, including lower emotional exhaustion […] and more effective coping strategies […] Married coworkers are an interesting population to examine, largely because their work and family roles are so highly integrated […]. As a result, both resources and demands are more likely to spill over between the work and family roles for married coworkers […] Janning and Neely (2006) found that married coworkers were more likely to talk about work-related issues while at home than married couples that had no work-related link.”

“Negative exchanges [between coworkers] are characterized by behaviors that are generally undesirable, disrespectful, and harmful to the focal employee or employees. Scholars have found that these negative exchanges influence the same outcomes as positive, supportive exchanges, but in opposite directions. For instance, in their recent meta-analysis of 161 independent studies, Chiaburu and Harrison (2008) found that antagonistic coworker exchanges are negatively related to job satisfaction, organizational commitment, and task performance and positively related to absenteeism, intent to quit, turnover, and counterproductive work behaviors. Unfortunately, despite the recent popularity of the negative exchange research, this literature still lacks construct clarity and definitional precision. […] Because these behaviors have generally referred to acts that impact both coworkers and the organization as a whole, much of this work fails to distinguish social interactions targeting specific individuals within the organization from the nonsocial behaviors explicitly targeting the overall organization. This is unfortunate given that coworker-focused actions and organization-focused actions represent unique dimensions of organizational behavior […] negative exchanges are likely to be preceded by certain antecedents. […] Antecedents may stem from characteristics of the enactor, of the target, or of the context in which the behaviors occur. For example, to the extent that enactors are low on socially relevant personality traits such as agreeableness, emotional stability, or extraversion […], they may be more prone to initiate a negative exchange. Likewise, an enactor who is a high Machiavellian may initiate a negative exchange with the goal of gaining power or establishing control over the target. Antagonistic behaviors may also occur as reciprocation for a previous attack (real or imagined) or as a proactive deterrent against a potential future negative behavior from the target. Similarly, enactors may initiate antagonism based on their perceptions of a coworker’s behavioral characteristics such as suboptimal productivity or weak work ethic. […] The reward system can also play a role as an antecedent condition for antagonism. When coworkers are highly interdependent and receive rewards based on the performance of the group as opposed to each individual, the incidence of antagonism may increase when there is substantial variance in performance among coworkers.”

“[E]mpirical evidence suggests that some people have certain traits that make them more vulnerable to coworker attacks. For example, employees with low self-esteem, low emotional stability, high introversion, or high submissiveness are more inclined to be the recipients of negative coworker behaviors […]. Furthermore, research also shows that people who engage in negative behaviors are likely to also become the targets of these behaviors […] Two of the most commonly studied workplace attitudes are employee job satisfaction […] and affective organizational commitment […] Chiaburu and Harrison (2008) linked general coworker antagonism with both attitudes. Further, the specific behaviors of bullying and incivility have also been found to adversely affect both job satisfaction and organizational commitment […]. A variety of behavioral outcomes have also been identified as outcomes of coworker antagonism. Withdrawal behaviors such as absenteeism, intention to quit, turnover, effort reduction […] are typical responses […] those who have been targeted by aggression are more likely to engage in aggression. […] Feelings of anger, fear, and negative mood have also been shown to mediate the effects of interpersonal mistreatment on behaviors such as withdrawal and turnover […] [T]he combination of enactor and target characteristics is likely to play an antecedent role to these exchanges. For instance, research in the diversity area suggests that people tend to be more comfortable around those with whom they are similar and less comfortable around people with whom they are dissimilar […] there may be a greater incidence of coworker antagonism in more highly diverse settings than in settings characterized by less diversity. […] research has suggested that antagonistic behaviors, while harmful to the target or focal employee, may actually be beneficial to the enactor of the exchange. […] Krischer, Penney, and Hunter (2010) recently found that certain types of counterproductive work behaviors targeting the organization may actually provide employees with a coping mechanism that ultimately reduces their level of emotional exhaustion.”

“CWB [counterproductive work behaviors] toward others is composed of volitional acts that harm people at work; in our discussion this would refer to coworkers. […] person-oriented organizational citizenship behaviors (OCB; Organ, 1988) consist of behaviors that help others in the workplace. This might include sharing job knowledge with a coworker or helping a coworker who had too much to do […] Social support is often divided into two forms: emotional support, which helps people deal with negative feelings in response to demanding situations, and instrumental support, which provides tangible aid in directly dealing with work demands […] one might expect that instrumental social support would be more strongly related to positive exchanges and positive relationships. […] coworker social support […] has [however] been shown to relate to strains (burnout) in a meta-analysis (Halbesleben, 2006). […] Griffin et al. suggested that low levels of the Five Factor Model […] dimensions of agreeableness, emotional stability, and extraversion might all contribute to negative behaviors. Support can be found for the connection between two of these personality characteristics and CWB. […] Berry, Ones, and Sackett (2007) showed in their meta-analysis that person-focused CWB (they used the term deviance) had significant mean correlations of –.20 with emotional stability and –.36 with agreeableness […] there was a significant relationship with conscientiousness (r = –.19). Thus, agreeable, conscientious, and emotionally stable individuals are less likely to engage in CWB directed toward people and would be expected to have fewer negative exchanges and better relationships with coworkers. […] Halbesleben […] suggests that individuals high on the Five Factor Model […] dimensions of agreeableness and conscientiousness would have more positive exchanges because they are more likely to engage in helping behavior. […] a meta-analysis has shown that both of these personality variables relate to the altruism factor of OCB in the direction expected […]. Specifically, the mean correlations of OCB were .13 for agreeableness and .22 for conscientiousness. Thus, individuals high on these two personality dimensions should have more positive coworker exchanges.”

“There is a long history of research in social psychology supporting the idea that people tend to be attracted to, bond with, and form friendships with others they believe to be similar […], and this is true whether the similarity is rooted in demographics that are fairly easy to observe […] or in attitudes, beliefs, and values that are more difficult to observe […] Social network scholars refer to this phenomenon as homophily, or the notion that “similarity breeds connection” […] although evidence of homophily has been found to exist in many different types of relationships, including marriage, frequency of communication, and career support, it is perhaps most evident in the formation of friendships […] We extend this line of research and propose that, in a team context that provides opportunities for tie formation, greater levels of perceived similarity among team members will be positively associated with the number of friendship ties among team members. […] A chief function of friendship ties is to provide an outlet for individuals to disclose and manage emotions. […] friendship is understood as a form of support that is not related to work tasks directly; rather, it is a “backstage resource” that allows employees to cope with demands by creating distance between them and their work roles […]. Thus, we propose that friendship network ties will be especially important in providing the type of coping resources that should foster team member well-being. Unfortunately, however, friendship network ties negatively impact team members’ ability to focus on their work tasks, and, in turn, this detracts from taskwork. […] When friends discuss nonwork topics, these individuals will be distracted from work tasks and will be exposed to off-task information exchanged in informal relationships that is irrelevant for performing one’s job. Additionally, distractions can hinder individuals’ ability to become completely engaged in their work (Jett & George).”

“Although teams are designed to meet important goals for both companies and their employees, not all team members work together well. Teams are frequently “cruel to their members” […] through a variety of negative team member exchanges (NTMEs) including mobbing, bullying, incivility, social undermining, and sexual harassment. […] Team membership offers identity […], stability, and security — positive feelings that often elevate work teams to powerful positions in employees’ lives […], so that members are acutely aware of how their teammates treat them. […] NTMEs may evoke stronger emotional, attitudinal, and behavioral consequences than negative encounters with nonteam members. In brief, team members who are targeted for NTMEs are likely to experience profound threats to personal identity, security, and stability […] when a team member targets another for negative interpersonal treatment, the target is likely to perceive that the entire group is behind the attack rather than the specific instigator alone […] Studies have found that NTMEs […] are associated with poor psychological outcomes such as depression; undesirable work attitudes such as low affective commitment, job dissatisfaction, and low organization-based self-esteem; and counterproductive behaviors such as deviance, job withdrawal, and unethical behavior […] Some initial evidence has also indicated that perceptions of rejection mediate the effects of NTMEs on target outcomes […] Perceptions of the comparative treatment of other team members are an important factor in reactions to NTMEs […]. When targets perceive they are “singled out,” NTMEs will cause more pronounced effects […] A significant body of literature has suggested that individuals guide their own behaviors through environmental social cues that they glean from observing the norms and values of others. Thus, the negative effects of NTMEs may extend beyond the specific targets; NTMEs can spread contagiously to other team members […]. The more interdependent the social actors in the team setting, the stronger and more salient will be the social cues […] [There] is evidence that as team members see others enacting NTMEs, their inhibitions against such behaviors are lowered.”

August 13, 2018 Posted by | Books, Psychology

Personal Relationships… (I)

“Across subdisciplines of psychology, research finds that positive, fulfilling, and satisfying relationships contribute to life satisfaction, psychological health, and physical well-being whereas negative, destructive, and unsatisfying relationships have a whole host of detrimental psychological and physical effects. This is because humans possess a fundamental “need to belong” […], characterized by the motivation to form and maintain lasting, positive, and significant relationships with others. The need to belong is fueled by frequent and pleasant relational exchanges with others and thwarted when one feels excluded, rejected, and hurt by others. […] This book uses research and theory on the need to belong as a foundation to explore how five different types of relationships influence employee attitudes, behaviors, and well-being. They include relationships with supervisors, coworkers, team members, customers, and individuals in one’s nonwork life. […] This book is written for a scientist–practitioner audience and targeted to both researchers and human resource management professionals. The contributors highlight both theoretical and practical implications in their respective chapters, with a common emphasis on how to create and sustain an organizational climate that values positive relationships and deters negative interpersonal experiences. Due to the breadth of topics covered in this edited volume, the book is also appropriate for advanced specialty undergraduate or graduate courses on I/O psychology, human resource management, and organizational behavior.”

The kind of stuff covered in books like this one relates closely to social stuff I lack knowledge about and/or am just not very good at handling. I don’t think too highly of this book’s coverage so far, but that’s at least partly due to the kinds of topics covered – it is what it is.

Below I have added some quotes from the first few chapters of the book.

“Work relationships are important to study in that they can exert a strong influence on employees’ attitudes and behaviors […]. The research evidence is robust and consistent; positive relational interactions at work are associated with more favorable work attitudes, less work-related strain, and greater well-being (for reviews see Dutton & Ragins, 2007; Grant & Parker, 2009). On the other side of the social ledger, negative relational interactions at work induce greater strain reactions, create negative affective reactions, and reduce well-being […]. The relationship science literature is clear: social connection has a causal effect on individual health and well-being”.

“[One] way to view relationships is to consider the different dimensions by which relationships vary. An array of dimensions that underlie relationships has been proposed […] Affective tone reflects the degree of positive and negative feelings and emotions within the relationship […] Relationships and groups marked by greater positive affective tone convey more enthusiasm, excitement, and elation for each other, while relationships consisting of more negative affective tone express more fear, distress, and scorn. […] Emotional carrying capacity refers to the extent that the relationship can handle the expression of a full range of negative and positive emotions as well as the quantity of emotion expressed […]. High-quality relationships have the ability to withstand the expression of more emotion and a greater variety of emotion […] Interdependence involves ongoing chains of mutual influence between two people […]. Degree of relationship interdependency is reflected through frequency, strength, and span of influence. […] A high degree of interdependence is commonly thought to be one of the hallmarks of a close relationship […] Intimacy is composed of two fundamental components: self-disclosure and partner responsiveness […]. Responsiveness involves the extent that relationship partners understand, validate, and care for one another. Disclosure refers to verbal communications of personally relevant information, thoughts, and feelings. Divulging more emotionally charged information of a highly personal nature is associated with greater intimacy […]. Disclosure tends to proceed from the superficial to the more intimate and expands in breadth over time […] Power refers to the degree that dominance shapes the relationship […] relationships marked by a power differential are more likely to involve unidirectional interactions. Equivalent power tends to facilitate bidirectional exchanges […] Tensility is the extent that the relationship can bend and endure strain in the face of challenges and setbacks […]. Relationship tensility contributes to psychological safety within the relationship. […] Trust is the belief that relationship partners can be depended upon and care about their partner’s needs and interests […] Relationships that include a great deal of trust are stronger and more resilient. A breach of trust can be one of the most difficult relationship challenges to overcome (Pratt & Dirks, 2007).”

“Relationships are separate entities from the individuals involved in the relationships. The relationship unit (typically a dyad) operates at a different level of analysis from the individual unit. […] For those who conduct research on groups or organizations, it is clear that operations at a group level […] operate at a different level than individual psychology, and it is not merely the aggregate of the individuals involved in the relationship. […] operations at one level (e.g., relationships) can influence behavior at the other level (e.g., individual). […] relationships are best thought of as existing at their own level of analysis, but one that interacts with other levels of analysis, such as individual and group or cultural levels. Relationships cannot be reduced to the actions of the individuals in them or the social structures where they reside but instead interact with the individual and group processes in interesting ways to produce behaviors. […] it is challenging to assess causality via experimental procedures when studying relationships. […] Experimental procedures are crucial for making inferences of causation but are particularly difficult in the case of relationships because it is tough to manipulate many important relationships (e.g., love, marriage, sibling relationships). […] relationships are difficult to observe at the very beginning and at the end, so methods have been developed to facilitate this.”

“[T]he organizational research could […] benefit from the use of theoretical models from the broader relationships literature. […] Interdependence theory is hardly ever seen in organizational research. There was some fascinating work in this area a few decades ago, especially in interdependence theory with the investment model […]. This work focused on the precursors of commitment in the workplace and found that, as in romantic relationships, the variables of satisfaction, investments, and alternatives played key roles in this process. The result is that when satisfaction and investments are high and alternative opportunities are low, commitment is high. However, it also means that if investments are sufficiently high and alternatives are sufficiently low, then satisfaction can be lowered and commitment will remain high — hence, the investment model is useful for understanding exploitation (Rusbult, Campbell, & Price, 1990).”
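To make the exploitation point concrete, here's a deliberately crude toy illustration – my own, not Rusbult's actual specification, which is estimated via regression rather than fixed unit weights:

```python
# Toy model (mine, not the investment model's actual estimation):
# commitment rises with satisfaction and investments and falls with
# alternatives, all scored 0-10 with arbitrary equal weights.
def commitment(satisfaction, investments, alternatives):
    return satisfaction + investments - alternatives

# A satisfied employee with moderate investments and real alternatives:
print(commitment(satisfaction=8, investments=5, alternatives=5))  # 8

# The exploitation case: satisfaction is low, but sunk investments are
# high and alternatives are scarce -- commitment comes out just as high.
print(commitment(satisfaction=2, investments=9, alternatives=3))  # 8
```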

“Because they cross formal levels in the organizational hierarchy, supervisory relationships necessarily involve an imbalance in formal power. […] A review by Keltner, Gruenfeld, and Anderson (2003) suggests that power affects how people experience emotions, whether they attend more to rewards or threats, how they process information, and the extent to which they inhibit their behavior around others. The literature clearly suggests that power influences affect, cognition, and behavior in ways that might tend to constrain the formation of positive relationships between individuals with varying degrees of power. […] The power literature is clear in showing that more powerful individuals attend less to their social context, including the people in it, than do less powerful individuals, and the literature suggests that supervisors (compared with subordinates) might tend to place less value on the relationship and be less attuned to their partner’s needs. Yet the formal power accorded to supervisors by the organization — via the supervisory role — is accompanied by the role-prescribed responsibility for the performance, motivation, and well-being of subordinates. Thus, the accountability for the formation of a positive supervisory relationship lies more heavily with the supervisor. […] As we examine the qualities of positive supervisory relationships, we make a clear distinction between effective supervisory behaviors and positive supervisory relationships. This is an important distinction […] a large body of leadership research has focused on traits or behaviors of supervisors […] and the affective, motivational, and behavioral responses of employees to those behaviors, with little attention paid to the interactions between the two. There are two practical implications of moving the focus from individuals to relationships: (1) supervisors who use “effective” leadership behaviors may or may not have positive relationships with employees; and (2) supervisors who have a positive relationship with one employee may not have equally positive relationships with other employees, even if they use the same “effective” behaviors.”

“There is a large and well-developed stream of research that focuses explicitly on exchanges between supervisors and the employees who report directly to them. Leader–member exchange (LMX) theory addresses the various types of functional relationships that can be formed between supervisors and subordinates. A core assumption of LMX theory is that supervisors do not have the time or resources to develop equally positive relationships with all subordinates. Thus, to minimize their investment and yield the greatest results for the organization, supervisors would develop close relationships with only a few subordinates […] These few high-quality relationships are marked by high levels of trust, loyalty, and support, whereas the remaining supervisory relationships are contractual in nature and depend on timely rewards allotted by supervisors in direct exchange for desirable behaviors […] There has been considerable confusion and debate in the literature about LMX theory and the construct validity of LMX measures […] Despite shortcomings in LMX research, it is [however] clear that supervisors form relationships of varying quality with subordinates […] Among factors associated with high LMX are the supervisor’s level of agreeableness […] and the employee’s level of extraversion […], feedback seeking […], and (negatively) negative affectivity […]. Those who perceived similarity in terms of family, money, career strategies, goals in life, education […], and gender […] also reported high LMX. […] Employee LMX is strongly related to attitudes, such as job satisfaction […] Supporting the notion that a positive supervisory relationship is good for employees, the LMX literature is replete with studies linking high LMX with thriving and autonomous motivation. […] The premise of the LMX research is that supervisory resources are limited and high-quality relationships are demanding. Thus, supervisors will be most effective when they allocate their resources efficiently and effectively, forming some high-quality and some instrumental relationships. But the empirical research from the LMX literature provides little (if any) evidence that supervisors who differentiate are more effective”.

“The norm of negative reciprocity obligates targets of harm to reciprocate with actions that produce roughly equivalent levels of harm — if someone is unkind to me, I should be approximately as unkind to him or her. […] But the trajectory of negative reciprocity differs in important ways when there are power asymmetries between the parties involved in a negative exchange relationship. The workplace revenge literature suggests that low-power targets of hostility generally withhold retaliatory acts. […] In exchange relationships where one actor is more dependent on the other for valued resources, the dependent/less powerful actor’s ability to satisfy his or her self-interests will be constrained […]. Subordinate targets of supervisor hostility should therefore be less able (than supervisor targets of subordinate hostility) to return the injuries they sustain […] To the extent subordinate contributions to negative exchanges are likely to trigger disciplinary responses by the supervisor target (e.g., reprimands, demotion, transfer, or termination), we can expect that subordinates will withhold negative reciprocity.”

“In the last dozen years, much has been learned about the contributions that supervisors make to negative exchanges with subordinates. […] Several dozen studies have examined the consequences of supervisor contributions to negative exchanges. This work suggests that exposure to supervisor hostility is negatively related to subordinates’ satisfaction with the job […], affective commitment to the organization […], and both in-role and extra-role performance contributions […] and is positively related to subordinates’ psychological distress […], problem drinking […], and unit-level counterproductive work behavior […]. Exposure to supervisor hostility has also been linked with family undermining behavior — employees who are the targets of abusive supervision are more likely to be hostile toward their own family members […] Most studies of supervisor hostility have accounted for moderating factors — individual and situational factors that buffer or exacerbate the effects of exposure. For example, Tepper (2000) found that the injurious effects of supervisor hostility on employees’ attitudes and strain reactions were stronger when subordinates have less job mobility and therefore feel trapped in jobs that deplete their coping resources. […] Duffy, Ganster, Shaw, Johnson, and Pagon (2006) found that the effects of supervisor hostility are more pronounced when subordinates are singled out rather than targeted along with multiple coworkers. […] work suggests that the effects of abusive supervision on subordinates’ strain reactions are weaker when subordinates employ impression management strategies […] and more confrontational (as opposed to avoidant) communication tactics […]. It is clear that not all subordinates react the same way to supervisor hostility, and characteristics of subordinates and the context influence the trajectory of subordinates’ responses. […] In a meta-analytic examination of studies of the correlates of supervisor-directed hostility, Hershcovis et al. (2007) found support for the idea that subordinates who believe that they have been the target of mistreatment are more likely to lash out at their supervisors. […] perhaps just as interesting as the associations that have been uncovered are several hypothesized associations that have not emerged. Greenberg and Barling (1999) found that supervisor-directed aggression was unrelated to subordinates’ alcohol consumption, history of aggression, and job security. Other work has revealed mixed results for the prediction that subordinate self-esteem will negatively predict supervisor-directed hostility (Inness, Barling, & Turner, 2005). […] Negative exchanges between supervisors and subordinates do not play out in isolation — others observe them and are affected by them. Yet little is known about the affective, cognitive, and behavioral responses of third parties to negative exchanges with supervisors.”

August 8, 2018 Posted by | Books, Psychology

Prevention of Late-Life Depression (II)

Some more observations from the book:

“In contrast to depression in childhood and youth, when genetic and developmental vulnerabilities play a significant role in the development of depression, the development of late-life depression is largely attributed to its interactions with acquired factors, especially medical illness [17, 18]. An analysis of the WHO World Health Survey indicated that the prevalence of depression among medical patients ranged from 9.3 to 23.0 %, significantly higher than that in individuals without medical conditions [19]. Wells et al. [20] found in the Epidemiologic Catchment Area Study that the risk of developing lifetime psychiatric disorders among individuals with at least one medical condition was 27.9 % higher than among those without medical conditions. […] Depression and disability mutually reinforce the risk of each other, and adversely affect disease progression and prognosis [21, 25]. […] disability caused by medical conditions serves as a risk factor for depression [26]. When people lose their normal sensory, motor, cognitive, social, or executive functions, especially in a short period of time, they can become very frustrated or depressed. Inability to perform daily tasks as before decreases self-esteem, reduces independence, increases the level of psychological stress, and creates a sense of hopelessness. On the other hand, depression increases the risk for disability. Negative interpretation, attention bias, and learned hopelessness of depressed persons may increase risky health behaviors that exacerbate physical disorders or disability. Meanwhile, depression-related cognitive impairment also affects role performance and leads to functional disability [25]. For example, Egede [27] found in the 1999 National Health Interview Survey that the risk of having functional disability among patients with the comorbidity of diabetes and depression was approximately 2.5–5 times higher than among those with either depression or diabetes alone. […] A leading cause of disability among medical patients is pain and pain-related fears […] Although a large proportion of pain complaints can be attributed to physiological changes from physical disorders, psychological factors (e.g., attention, interpretation, and coping skills) play an important role in perception of pain […] Bair et al. [31] indicated in a literature review that the prevalence of pain was higher among depressed patients than non-depressed patients, and the prevalence of major depression was also higher among pain patients compared to those without pain complaints.”

“Alcohol use has more serious adverse health effects on older adults than on other age groups, since aging-related physiological changes (e.g. reduced liver detoxification and renal clearance) affect alcohol metabolism, increase the blood concentration of alcohol, and magnify negative consequences. More importantly, alcohol interacts with a variety of frequently prescribed medications, potentially influencing both treatment and adverse effects. […] Due to age-related changes in pharmacokinetics and pharmacodynamics, older adults are a vulnerable population to […] adverse drug effects. […] Adverse drug events are frequently due to failure to adjust dosage or to account for drug–drug interactions in older adults [64]. […] Loneliness […] is considered an independent risk factor for depression [46, 47], and has been demonstrated to be associated with low physical activity, increased cardiovascular risks, hyperactivity of the hypothalamic-pituitary-adrenal axis, and activation of immune response [for details, see Cacioppo & Patrick’s book on these topics – US] […] Hopelessness is a key concept of major depression [54], and also an independent risk factor of suicidal ideation […] Hopelessness reduces expectations for the future, and negatively affects judgment for making medical and behavioral decisions, including non-adherence to medical regimens or engaging in unhealthy behaviors.”

“Co-occurring depression and medical conditions are associated with more functional impairment and mortality than expected from the severity of the medical condition alone. For example, depression accompanying diabetes confers increased functional impairment [27], complications of diabetes [65, 66], and mortality [67–71]. Frasure-Smith and colleagues highlighted the prognostic importance of depression among persons who had sustained a myocardial infarction (MI), finding that depression was a significant predictor of mortality at both 6 and 18 months post MI [72, 73]. Subsequent follow-up studies have borne out the increased risk conferred by depression on the mortality of patients with cardiovascular disease [10, 74, 75]. Over the course of a 2-year follow-up interval, depression contributed as much to mortality as did myocardial infarction or diabetes, with the population attributable fraction of mortality due to depression approximately 13 % (similar to the attributable risk associated with heart attack at 11 % and diabetes at 9 %) [76]. […] Although the bidirectional relationship between physical disorders and depression has been well known, there are still relatively few randomized controlled trials on preventing depression among medically ill patients. […] Rates of attrition [in post-stroke depression prevention trials have been observed to be] high […] Stroke, acute coronary syndrome, cancer, and other conditions impose a variety of treatment burdens on patients, so that additional interventions without direct or immediate clinical effects may not be acceptable [95]. So even with good participation rates, lack of adherence to the intervention might limit effects.”

“Late-life depression (LLD) is a heterogeneous disease, with multiple risk factors, etiologies, and clinical features. It has been recognized for many years that there is a significant relationship between the presence of depression and cerebrovascular disease in older adults [1, 2]. This subtype of LLD was eventually termed “vascular depression.” […] There have been a multitude of studies associating white matter abnormalities with depression in older adults using MRI technology to visualize lesions, or what appear as hyperintensities in the white matter on T2-weighted scans. A systematic review concluded that white matter hyperintensities (WMH) are more common and severe among older adults with depression compared to their non-depressed peers [9]. […] WMHs are associated with older age [13] and cerebrovascular risk factors, including diabetes, heart disease, and hypertension [14–17]. White matter lesion severity and extent of WMH volume have been related to the severity of depression in late life [18, 19]. For example, among 639 older, community-dwelling adults, white matter lesion (WML) severity was found to predict depressive episodes and symptoms over a 3-year period [19]. […] Another way of investigating white matter integrity is with diffusion tensor imaging (DTI), which measures the diffusion of water in tissues and allows for indirect evidence of the microstructure of white matter, most commonly represented as fractional anisotropy (FA) and mean diffusivity (MD). DTI may be more sensitive to white matter pathology than is quantification of WMH […] A number of studies have found lower FA in widespread regions among individuals with LLD relative to controls [34, 36, 37]. […] lower FA has been associated with poorer performance on measures of cognitive functioning among patients with LLD [35, 38–40] and with measures of cerebrovascular risk severity. […] It is important to recognize that FA reflects the organization of fiber tracts, including fiber density, axonal diameter, or myelination in white matter. Thus, lower FA can result from multiple pathophysiological sources [42, 43]. […] Together, the aforementioned studies provide support for the vascular depression hypothesis. They demonstrate that white matter integrity is reduced in patients with LLD relative to controls, is somewhat specific to regions important for cognitive and emotional functioning, and is associated with cognitive functioning and depression severity. […] There is now a wealth of evidence to support the association between vascular pathology and depression in older age. While the etiology of depression in older age is multifactorial, from the epidemiological, neuroimaging, behavioral, and genetic evidence available, we can conclude that vascular depression represents one important subtype of LLD. The mechanisms underlying the relationship between vascular pathology and depression are likely multifactorial, and may include disrupted connections between key neural regions, reduced perfusion of blood to key brain regions integral to affective and cognitive processing, and inflammatory processes.”
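For readers unfamiliar with the DTI scalars mentioned above: FA and MD have standard definitions in terms of the diffusion tensor's eigenvalues. The sketch below is mine, and the eigenvalue sets are illustrative, not data from any of the cited studies:

```python
# Standard definitions (not from the book) of the two DTI scalars the
# passage names, computed from the diffusion tensor's eigenvalues
# (l1 >= l2 >= l3, in units of mm^2/s).
import math

def mean_diffusivity(l1, l2, l3):
    return (l1 + l2 + l3) / 3

def fractional_anisotropy(l1, l2, l3):
    md = mean_diffusivity(l1, l2, l3)
    num = (l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2
    den = l1 ** 2 + l2 ** 2 + l3 ** 2
    # 0 = fully isotropic diffusion; values near 1 = highly directional.
    return math.sqrt(1.5 * num / den)

# Hypothetical values: a coherent white matter tract vs. tissue where
# the tract's microstructural organization has degraded.
print(fractional_anisotropy(1.7e-3, 0.3e-3, 0.2e-3))  # ~0.84, organized
print(fractional_anisotropy(0.9e-3, 0.7e-3, 0.6e-3))  # ~0.21, degraded
```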

“Cognitive changes associated with depression have been the focus of research for decades. Results have been inconsistent, likely as a result of methodological differences in how depression is diagnosed and cognitive functioning measured, as well as the effects of potential subtypes and the severity of depression […], though deficits in executive functioning, learning and memory, and attention have been associated with depression in most studies [75]. In older adults, additional confounding factors include the potential presence of primary degenerative disorders, such as Alzheimer’s disease, which can pose a challenge to differential diagnosis in its early stages. […] LLD with cognitive dysfunction has been shown to result in greater disability than depressive symptoms alone [6], and MCI [mild cognitive impairment, US] with co-occurring LLD has been shown to double the risk of developing Alzheimer’s disease (AD) compared to MCI alone [86]. The conversion from MCI to AD also appears to occur earlier in patients with co-occurring depressive symptoms, as demonstrated by Modrego & Ferrandez [86] in their prospective cohort study of 114 outpatients diagnosed with amnestic MCI. […] Given accruing evidence for abnormal functioning of a number of cortical and subcortical networks in geriatric depression, of particular interest is whether these abnormalities are a reflection of the actively depressed state, or whether they may persist following successful resolution of symptoms. To date, studies have investigated this question through either longitudinal investigation of adults with geriatric depression, or comparison of depressed elders who are actively depressed versus those who have achieved symptom remission. Encouragingly, successful treatment has been reliably associated with normalization of some aspects of disrupted network functioning. For example, successful antidepressant treatment is associated with reduction of the elevated cerebral glucose metabolism observed during depressed states (e.g., [71–74]), with greater symptom reduction associated with greater metabolic change […] Taken together, these studies suggest that although a subset of the functional abnormalities observed during the LLD state may resolve with successful treatment, other abnormalities persist and may be tied to damage to the structural connectivity in important affective and cognitive networks. […] studies suggest a chronic decrement in cognitive functioning associated with LLD that is not adequately addressed through improvement of depressive symptoms alone.”

“A review of the literature on evidence-based treatments for LLD found that about 50 % of patients improved on antidepressants, but that the number needed to treat (NNT) was quite high (NNT = 8, [139]) and placebo effects were significant [140]. Additionally, no difference was demonstrated in the effectiveness of one antidepressant drug class over another […], and in one-third of patients, depression was resistant to monotherapy [140]. The addition of medications or switching within or between drug classes appears to result in improved treatment response for these patients [140, 141]. A meta-analysis of patient-level variables demonstrated that duration of depressive symptoms and baseline depression severity significantly predicts response to antidepressant treatment in LLD, with chronically depressed older patients with moderate-to-severe symptoms at baseline experiencing more improvement in symptoms than mildly and acutely depressed patients [142]. Pharmacological treatment response appears to range from incomplete to poor in LLD with co-occurring cognitive impairment.”
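To make the NNT arithmetic explicit – my own illustration, with a hypothetical placebo response rate rather than the review's actual data:

```python
# A minimal sketch of how "about 50% improved" and NNT = 8 fit together;
# the placebo response rate here is hypothetical, not the review's figure.
def nnt(response_treated, response_control):
    """Number needed to treat = 1 / absolute risk reduction."""
    return 1 / (response_treated - response_control)

# If 50% improve on antidepressants and 37.5% improve on placebo, the
# absolute risk reduction is 12.5 percentage points:
print(nnt(0.50, 0.375))  # 8.0 -> treat 8 patients for one extra responder
```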

“[C]ompared to other formulations of prevention, such as primary, secondary, or tertiary — in which interventions are targeted at the level of disease/stage of disease — the IOM conceptual framework involves interventions that are targeted at the level of risk in the population [2]. […] [S]elective prevention studies have an important “numbers” advantage — similar to that of indicated prevention trials: the relatively high incidence of depression among persons with key risk markers enables investigators to test interventions with strong statistical power, even with somewhat modest sample sizes. This fact was illustrated by Schoevers and colleagues [3], who were able to account for nearly 50 % of total risk of late-life depression with consideration of only a handful of factors. Indeed, research, largely generated by groups in the Netherlands and the USA, has identified that selective prevention may be one of the most efficient approaches to late-life depression prevention, as these groups have estimated that targeting persons at high risk for depression — based on risk markers such as medical comorbidity, low social support, or physical/functional disability — can yield theoretical numbers needed to treat (NNTs) of approximately 5–7 in primary care settings [4–7]. […] compared to the findings from selective prevention trials targeting older persons with general health/medical problems, […] trials targeting older persons based on sociodemographic risk factors have been more mixed and did not reveal as consistent a pattern of benefits for selective prevention of depression.”
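The same arithmetic as in the NNT sketch above shows why targeting high-risk groups drives the theoretical NNT down into the 5–7 range; the baseline incidences and the 50 % relative risk reduction below are hypothetical:

```python
# Sketch (hypothetical numbers): with a fixed relative risk reduction,
# the absolute risk reduction -- and hence the NNT = 1/ARR -- depends
# directly on the group's baseline incidence of depression.
def prevention_nnt(baseline_incidence, relative_reduction):
    absolute_reduction = baseline_incidence * relative_reduction
    return 1 / absolute_reduction

for incidence in (0.05, 0.15, 0.30):
    print(f"baseline {incidence:.0%}: NNT ~ {prevention_nnt(incidence, 0.5):.0f}")
# baseline 5%:  NNT ~ 40  (unselected general population)
# baseline 15%: NNT ~ 13
# baseline 30%: NNT ~ 7   (high-risk group -- the 5-7 range cited above)
```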

“Few of the studies in the existing literature that involve interventions to prevent depression and/or reduce depressive symptoms in older populations have included economic evaluations [13]. The identification of cost-effective interventions to provide to groups at high risk for depression is an important public health goal, as such treatments may avert or reduce a significant amount of the disease burden. […] A study by Katon and colleagues [8] showed that elderly patients with either subsyndromal or major depression had significantly higher medical costs during the previous 6 months than those without depression; total healthcare costs were $1,045 to $1,700 greater, and total outpatient/ambulatory costs ranged from being $763 to $979 more, on average. Depressed patients had greater usage of health resources in every category of care examined, including those that are not mental health-related, such as emergency department visits. No difference in excess costs was found between patients with a DSM-IV depressive disorder and those with depressive symptoms only, however, as mean total costs were 51 % higher in the subthreshold depression group (95 % CI = 1.39–1.66) and 49 % higher in the MDD/dysthymia group (95 % CI = 1.28–1.72) than in the nondepressed group [8]. In a similar study, the usage of various types of health services by primary care patients in the Netherlands was assessed, and average costs were determined to be 1,403 more in depressed individuals versus control patients [21]. Study investigators once again observed that patients with depression had greater utilization of both non-mental and mental healthcare services than controls.”

“In order for routine depression screening in the elderly to be cost-effective […] appropriate follow-up measures must be taken with those who screen positive, including a diagnostic interview and/or referral to a mental health professional [this – the necessity/requirement of proper follow-up following screens in order for screening to be cost-effective – is incidentally a standard result in screening contexts, see also Juth & Munthe’s book – US] [23, 25]. For example, subsequent steps may include initiation of psychotherapy or antidepressant treatment. Thus, one reason that the USPSTF does not recommend screening for depression in settings where proper mental health resources do not exist is that the evidence suggests that outcomes are unlikely to improve without effective follow-up care […]  as per the USPSTF suggestion, Medicare will only cover the screening when the appropriate supports for proper diagnosis and treatment are available […] In order to determine which interventions to prevent and treat depression should be provided to those who screen positive for depressive symptoms and to high-risk populations in general, cost-effectiveness analyses must be completed for a variety of different treatments and preventive measures. […] questions remain regarding whether annual versus other intervals of screening are most cost-effective. With respect to preventive interventions, the evidence to date suggests that these are cost-effective in settings where those at the highest risk are targeted.”

February 19, 2018 Posted by | Books, Cardiology, Diabetes, Health Economics, Neurology, Pharmacology, Psychiatry, Psychology | Leave a comment

Prevention of Late-Life Depression (I)

“Late-life depression is a common and highly disabling condition and is also associated with higher health care utilization and overall costs. The presence of depression may complicate the course and treatment of comorbid major medical conditions that are also highly prevalent among older adults — including diabetes, hypertension, and heart disease. Furthermore, a considerable body of evidence has demonstrated that, for older persons, residual symptoms and functional impairment due to depression are common — even when appropriate depression therapies are being used. Finally, the worldwide phenomenon of a rapidly expanding older adult population means that unprecedented numbers of seniors — and the providers who care for them — will be facing the challenge of late-life depression. For these reasons, effective prevention of late-life depression will be a critical strategy to lower overall burden and cost from this disorder. […] This textbook will illustrate the imperative for preventing late-life depression, introduce a broad range of approaches and key elements involved in achieving effective prevention, and provide detailed examples of applications of late-life depression prevention strategies”.

I gave the book two stars on goodreads. There are 11 chapters in the book, written by 22 different contributors/authors, so of course there’s a lot of variation in the quality of the material included; the two-star rating was an overall assessment of the quality of the material, and the last two chapters – but in particular chapter 10 – did a really good job convincing me that the book did not deserve a 3rd star (if you decide to read the book, I advise you to skip chapter 10). In general I think many of the authors are way too focused on statistical significance and much too hesitant to report actual effect sizes, which are much more interesting. Gender is mentioned repeatedly throughout the coverage as an important variable, to the extent that people who do not read the book carefully might think this is one of the most important variables at play; but when you look at actual effect sizes, you get reported ORs of ~1.4 for this variable, compared to e.g. ORs of ~8–9 for the bereavement variable (see below). You can quibble about population attributable fraction and so on here, but if the effect size is that small it’s unlikely to be all that useful in terms of directing prevention efforts/resource allocation (especially considering that women make up the majority of the total population in these older age groups anyway, as they have higher life expectancy than their male counterparts).

Anyway, below I’ve added some quotes and observations from the first few chapters of the book.

“Meta-analyses of more than 30 randomized trials conducted in high-income countries show that the incidence of new depressive and anxiety disorders can be reduced by 25–50 % over 1–2 years, compared to usual care, through the use of learning-based psychotherapies (such as interpersonal psychotherapy, cognitive behavioral therapy, and problem-solving therapy) […] The case for depression prevention is compelling and represents the key rationale for this volume: (1) Major depression is both prevalent and disabling, typically running a relapsing or chronic course. […] (2) Major depression is often comorbid with other chronic conditions like diabetes, amplifying the disability associated with these conditions and worsening family caregiver burden. (3) Depression is associated with worse physical health outcomes, partly mediated through poor treatment adherence, and it is associated with excess mortality after myocardial infarction, stroke, and cancer. It is also the major risk factor for suicide across the life span and particularly in old age. (4) Available treatments are only partially effective in reducing symptom burden, sustaining remission, and averting years lived with disability.”

“[M]any people suffering from depression do not receive any care, and approximately a third of those receiving care do not respond to current treatments. The risk of recurrence is high, also in older persons: half of those who have experienced a major depression will experience one or more recurrences [4]. […] Depression increases the risk of death: among people suffering from depression the risk of dying is 1.65 times higher than among people without depression [7], with a dose-response relation between the severity and duration of depression and the resulting excess mortality [8]. In adults, the average length of a depressive episode is 8 months, but in 20 % of people the depression lasts longer than 2 years [9]. […] It has been estimated that in Australia […] 60 % of people with an affective disorder receive treatment, and, using guidelines and standards, only 34 % receive effective treatment [14]. This translates into preventing 15 % of Years Lived with Disability [15], a measure of disease burden [14], and stresses the need for prevention [16]. Primary health care providers frequently do not recognize depression, in particular among the elderly. Older people may present their depressive symptoms differently from younger adults, with more emphasis on physical complaints [17, 18]. Adequate diagnosis of late-life depression can also be hampered by comorbid conditions such as Parkinson’s disease and dementia that may have similar symptoms, or by the fact that elderly people as well as care workers may assume that “feeling down” is part of becoming older [17, 18]. […] Many people suffering from depression do not seek professional help or are not identified as depressed [21]. Almost 14 % of elderly people in community-type living arrangements suffer from a severe depression requiring clinical attention [22], and more than 50 % of those have a chronic course [4, 23]. Smit et al. reported an incidence of 6.1 % of chronic or recurrent depression in a sample of 2,200 elderly people (ages 55–85) [21].”

“Prevention differs from intervention and treatment as it is aimed at general population groups who vary in risk level for mental health problems such as late-life depression. The Institute of Medicine (IOM) has introduced a prevention framework, which provides a useful model for comprehending the different objectives of the interventions [29]. The overall goal of prevention programs is reducing risk factors and enhancing protective factors.
The IOM framework distinguishes three types of prevention interventions: (1) universal preventive interventions, (2) selective preventive interventions, and (3) indicated preventive interventions. Universal preventive interventions are targeted at the general population, regardless of risk status or the presence of symptoms. Selective preventive interventions serve those sub-populations who have a significantly higher than average risk of a disorder, either imminently or over a lifetime. Indicated preventive interventions target identified individuals with minimal but detectable signs or symptoms suggesting a disorder. This type of prevention consists of early recognition of and early intervention in disease in order to prevent deterioration [30]. For each of the three types of interventions, the goal is to reduce the number of new cases. The goal of treatment, on the other hand, is to reduce prevalence, or the total number of cases. By reducing incidence you also reduce prevalence [5]. […] prevention research differs from treatment research in various ways. One of the most important differences is the fact that participants in treatment studies already meet the criteria for the illness being studied, such as depression. The intervention is targeted at achieving improvement or remission of the specific condition more quickly than if no intervention had taken place. In prevention research, the participants do not meet the specific criteria for the illness being studied, and the overall goal of the intervention is for the clinical illness to develop at a lower rate than in a comparison group [5].”

“A couple of risk factors [for depression] occur more frequently among the elderly than among young adults. The loss of a loved one or the loss of a social role (e.g., employment), a decrease in social support and network, and the increasing chance of isolation occur more frequently among the elderly. Many elderly also suffer from physical diseases: 64 % of elderly aged 65–74 have a chronic disease [36] […]. It is important to note that depression often co-occurs with other disorders such as physical illness and other mental health problems (comorbidity). Losing a spouse can have significant mental health effects. Almost half of all widows and widowers meet the criteria for depression according to the DSM-IV during the first year after the loss [37]. Depression after the loss of a loved one is normal in times of mourning. However, when depressive symptoms persist over a longer period of time it is possible that a depression is developing. Zisook and Shuchter found that a year after the loss of a spouse 16 % of widows and widowers met the criteria for depression, compared to 4 % of those who did not lose their spouse [38]. […] People with a chronic physical disease are also at higher risk of developing a depression. An estimated 12–36 % of those with a chronic physical illness also suffer from clinical depression [40]. […] around 25 % of cancer patients suffer from depression [40]. […] Depression is relatively common among elderly residing in hospitals and retirement and nursing homes. An estimated 6–11 % of residents have a depressive illness, and around 30 % have depressive symptoms [41]. […] Loneliness is common among the elderly. Among those 60 years or older, 43 % reported being lonely in a study conducted by Perissinotto et al. […] Loneliness is often associated with physical and mental complaints; apart from depression, it also increases the risk of developing dementia and of excess mortality [43].”

“From the public health perspective it is important to know what the potential health benefits would be if the harmful effects of certain risk factors could be removed. What health benefits would arise, and at what effort and cost? To measure this, the population attributable fraction (PAF) can be used. The PAF is expressed as a percentage and indicates the decrease in incidence (the number of new cases) that would result if the harmful effects of the targeted risk factors were fully taken away. For public health it would be more effective to design an intervention targeted at a risk factor with a high PAF than at one with a low PAF. […] An intervention needs to be efficacious in order to be implemented; this means that it has to show a statistically significant difference from placebo or another treatment. Secondly, it needs to be effective: it needs to prove its benefits in real-life (“everyday care”) circumstances as well. Thirdly, it needs to be efficient. The measure used to address this is the Number Needed to Treat (NNT). The NNT expresses how many people need to be treated to prevent the onset of one new case of the disorder; the lower the number, the more efficient the intervention [45]. To summarize, an indicated preventive intervention would ideally be targeted at a relatively small group of people with a high absolute chance of developing the disease, and a risk profile that is responsible for a high PAF. Furthermore, there needs to be an intervention that is both effective and efficient. […] a more detailed and specific description of the target group results in a higher absolute risk, a lower NNT, and also a lower PAF. This is helpful in determining the costs and benefits of interventions aiming at more specific or broader subgroups in the population. […] Unfortunately, very large samples are required to demonstrate risk reductions in universal or selective interventions [46]. […] If the incidence rate is higher in the target population, which is usually the case in selective and even more so in indicated prevention, the number of participants needed to prove an effect is much smaller [5]. This shows that, even though universal interventions may be effective, their effects are harder to prove than those of indicated prevention. […] Indicated and selective prevention appear to be the most successful approaches to preventing depression to date; however, more research needs to be conducted in larger samples to determine which prevention method is really most effective.”
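
To make the PAF and NNT calculations above concrete, here’s a minimal Python sketch with made-up numbers (nothing below is taken from the book): the PAF is computed with Levin’s standard formula, and the NNT as the reciprocal of the absolute risk reduction.

    def paf(exposure_prevalence, relative_risk):
        # Levin's formula: the share of incidence that would disappear if
        # the harmful effect of the exposure were fully removed.
        excess = exposure_prevalence * (relative_risk - 1)
        return excess / (1 + excess)

    def nnt(risk_untreated, risk_treated):
        # Number Needed to Treat: 1 / absolute risk reduction.
        return 1 / (risk_untreated - risk_treated)

    # A common but weak risk factor (40 % exposed, RR = 1.4)...
    print(round(paf(0.40, 1.4), 2))    # -> 0.14
    # ...versus a rare but strong one (5 % exposed, RR = 8).
    print(round(paf(0.05, 8.0), 2))    # -> 0.26
    # Indicated prevention: 30 % incidence untreated vs. 15 % treated.
    print(round(nnt(0.30, 0.15), 1))   # -> 6.7

With these particular (illustrative) numbers the rare-but-strong factor carries the larger PAF; with a sufficiently common weak exposure the ordering flips, which is exactly the trade-off between broader and more specific target groups described in the quote.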

“Groffen et al. [6] recently conducted an investigation among a sample of 4,809 participants from the Reykjavik Study (aged 66–93 years). Similar to the findings presented by Vink and colleagues [3], education level was related to depression risk: participants with lower education levels were more likely to report depressed mood in late-life than those with a college education (odds ratio [OR] = 1.87, 95 % confidence interval [CI] = 1.35–2.58). […] Results from a meta-analysis by Lorant and colleagues [8] showed that lower-SES individuals had greater odds of developing depression than those in the highest SES group (OR = 1.24, p = 0.004); however, the studies involved in this review did not focus on older populations. […] Cole and Dendukuri [10] performed a meta-analysis of studies involving middle-aged and older adult community residents, and determined that female gender was a risk factor for depression in this population (pooled OR = 1.4, 95 % CI = 1.2–1.8), but that old age was not. Blazer and colleagues [11] found a significant positive association between older age and depressive symptoms in a sample consisting of community-dwelling older adults; however, when potential confounders such as physical disability, cognitive impairment, and gender were included in the analysis, the relationship between chronological age and depressive symptoms was reversed (p < 0.01). A study by Schoevers and colleagues [14] had similar results […] these findings suggest that the higher incidence of depression observed among the oldest-old may be explained by other relevant factors. By contrast, the association of female gender with increased risk of late-life depression has been a highly consistent finding.”

“In an examination of marital bereavement, Turvey et al. [16] analyzed data among 5,449 participants aged 70 years […] recently bereaved participants had nearly nine times the odds of developing syndromal depression as married participants (OR = 8.8, 95 % CI = 5.1–14.9, p < 0.0001), and they also had significantly higher risk of depressive symptoms 2 years after the spousal loss. […] Caregiving burden is well recognized as a predisposing factor for depression among older adults [18]. Many older persons are coping with physically and emotionally challenging caregiving roles (e.g., caring for a spouse/partner with a serious illness or with cognitive or physical decline). Additionally, many caregivers experience elements of grief, as they mourn the loss of relationship with or the decline of valued attributes of their care recipients. […] Concepts of social isolation have also been examined with regard to late-life depression risk. For example, among 892 participants aged 65 years […], Gureje et al. [13] found that women with a poor social network and rural residential status were more likely to develop major depressive disorder […] Harlow and colleagues [21] assessed the association between social network and depressive symptoms in a study involving both married and recently widowed women between the ages of 65 and 75 years; they found that the number of friends at baseline had an inverse association with the CES-D (Center for Epidemiologic Studies Depression Scale) score after 1 month (p < 0.05) and 12 months (p = 0.06) of follow-up. In a study that explicitly addressed the concept of loneliness, Jaremka et al. [22] related this factor to late-life depression; importantly, loneliness has been validated as a distinct construct, distinguishable among older adults from depression. Among 229 participants (mean age = 70 years) in a cohort of older adults caring for a spouse with dementia, loneliness (as measured by the NYU scale) significantly predicted incident depression (p < 0.001). Finally, social support has been identified as important to late-life depression risk. For example, Cui and colleagues [23] found that low perceived social support significantly predicted worsening depression status over a 2-year period among 392 primary care patients aged 65 years and above.”

“Saunders and colleagues [26] reported […] findings with alcohol drinking behavior as the predictor. Among 701 community-dwelling adults aged 65 years and above, the authors found a significant association between prior heavy alcohol consumption and late-life depression among men: compared to those who were not heavy drinkers, men with a history of heavy drinking had nearly fourfold higher odds of being diagnosed with depression (OR = 3.7, 95 % CI = 1.3–10.4, p < 0.05). […] Almeida et al. found that obese men were more likely than non-obese (body mass index [BMI] < 30) men to develop depression (HR = 1.31, 95 % CI = 1.05–1.64). Consistent with these results, presence of the metabolic syndrome was also found to increase the risk of incident depression (HR = 2.37, 95 % CI = 1.60–3.51). Finally, leisure-time activities are also important to study with regard to late-life depression risk, as these too are readily modifiable behaviors. For example, Magnil et al. [30] examined such activities among a sample of 302 primary care patients aged 60 years. The authors observed that those who lacked leisure activities had an increased risk of developing depressive symptoms over the 2-year study period (OR = 12, 95 % CI = 1.1–136, p = 0.041). […] an important future direction in addressing social and behavioral risk factors in late-life depression is to make more progress in trials that aim to alter those risk factors that are actually modifiable.”

February 17, 2018 Posted by | Books, Epidemiology, Health Economics, Medicine, Psychiatry, Psychology, Statistics | Leave a comment

Depression (II)

I have added some more quotes from the last half of the book as well as some more links to relevant topics below.

“The early drugs used in psychiatry were sedatives, as calming a patient was probably the only treatment that was feasible and available. Also, it made it easier to manage large numbers of individuals with small numbers of staff at the asylum. Morphine, hyoscine, chloral, and later bromide were all used in this way. […] Insulin coma therapy came into vogue in the 1930s following the work of Manfred Sakel […] Sakel initially proposed this treatment as a cure for schizophrenia, but its use gradually spread to mood disorders, to the extent that asylums in Britain opened so-called insulin units. […] Recovery from the coma required administration of glucose, but complications were common and death rates ranged from 1 to 10 per cent. Insulin coma therapy was initially viewed as having tremendous benefits, but later re-examinations have highlighted that the results could also be explained by a placebo effect associated with the dramatic nature of the process or, tragically, by the fact that deprivation of glucose supplies to the brain may have reduced the person’s reactivity by inducing permanent damage.”

“[S]ome respected scientists and many scientific journals remain ambivalent about the empirical evidence for the benefits of psychological therapies. Part of the reluctance appears to result from the lack of very large-scale clinical trials of therapies (compared to international, multi-centre studies of medication). However, a problem for therapy research is that there is no large-scale funding from big business for therapy trials […] It is hard to implement optimum levels of quality control in research studies of therapies. A tablet can have the same ingredients and be prescribed in almost exactly the same way in different treatment centres and different countries. If a patient does not respond to this treatment, the first thing we can do is check whether they received the right medication in the correct dose for a sufficient period of time. This is much more difficult to achieve with psychotherapy and fuels concerns about how therapy is delivered and about potential biases related to researcher allegiance (i.e. clinical centres that invent a therapy show better outcomes than those that did not) and generalizability (our ability to replicate the therapy model exactly in a different place with different therapists). […] Overall, the ease of prescribing a tablet, the more traditional evidence base for the benefits of medication, and the lack of availability of trained therapists in some regions mean that therapy still plays second fiddle to medications in the majority of treatment guidelines for depression. […] The mainstay of treatments offered to individuals with depression has changed little in the last thirty to forty years. Antidepressants are the first-line intervention recommended in most clinical guidelines”.

“[W]hilst some cases of mild–moderate depression can benefit from antidepressants (e.g. chronic mild depression of several years’ duration can often respond to medication), it is repeatedly shown that the only group who consistently benefit from antidepressants are those with severe depression. The problem is that in the real world, most antidepressants are actually prescribed for less severe cases, that is, the group least likely to benefit, which is part of the reason why the argument about whether antidepressants work is not going to go away any time soon.”

“The economic argument for therapy can only be sustained if it is shown that the long-term outcome of depression (fewer relapses and better quality of life) is improved by receiving therapy instead of medication or by receiving both therapy and medication. Despite claims about how therapies such as CBT, behavioural activation, IPT, or family therapy may work, the reality is that many of the elements included in these therapies are the same as elements described in all the other effective therapies (sometimes referred to as empirically supported therapies). The shared elements include forming a positive working alliance with the depressed person, sharing the model and the plan for therapy with the patient from day one, and helping the patient engage in active problem-solving, etc. Given the degree of overlap, it is hard to make a real case for using one empirically supported therapy instead of another. Also, there are few predictors (besides symptom severity and personal preference) that consistently show who will respond to one of these therapies rather than to medication. […] One of the reasons for some scepticism about the value of therapies for treating depression is that it has proved difficult to demonstrate exactly what mediates the benefits of these interventions. […] despite the enthusiasm for mindfulness, there were fewer than twenty high-quality research trials on its use in adults with depression by the end of 2015, and most of these studies had fewer than 100 participants. […] exercise improves the symptoms of depression compared to no treatment at all, but the currently available studies on this topic are less than ideal (with many problems in the design of the studies or the samples of participants included in the clinical trials). […] Exercise is likely to be a better option for those individuals whose mood improves from participating in the experience, rather than for someone who is so depressed that they feel further undermined by the process or feel guilty about ‘not trying hard enough’ when they attend the programme.”

“Research […] indicates that treatment is important: a study from the USA in 2005 showed that those who took the prescribed antidepressant medications had a 20 per cent lower rate of absenteeism than those who did not receive treatment for their depression. Absence from work is only one half of the depression–employment equation. In recent times, a new concept, ‘presenteeism’, has been introduced to try to describe the problem of individuals who are attending their place of work but have reduced efficiency (usually because their functioning is impaired by illness). As might be imagined, presenteeism is a common issue in depression, and a study in the USA in 2007 estimated that a depressed person will lose 5–8 hours of productive work every week because the symptoms they experience directly or indirectly impair their ability to complete work-related tasks. For example, depression was associated with reduced productivity (due to lack of concentration, slowed physical and mental functioning, loss of confidence), and impaired social functioning”.

“Health economists do not usually restrict their estimates of the cost of a disorder simply to the funds needed for treatment (i.e. the direct health and social care costs). A comprehensive economic assessment also takes into account the indirect costs. In depression these will include costs associated with employment issues (e.g. absenteeism and presenteeism; sickness benefits), costs incurred by the patient’s family or significant others (e.g. associated with time away from work to care for someone), and costs arising from premature death such as depression-related suicides (so-called mortality costs). […] Studies from around the world consistently demonstrate that the direct health care costs of depression are dwarfed by the indirect costs. […] Interestingly, absenteeism is usually estimated to be about one-quarter of the costs of presenteeism.”

Jakob Klaesi. António Egas Moniz. Walter Jackson Freeman II.
Electroconvulsive therapy.
Psychosurgery.
Vagal nerve stimulation.
Chlorpromazine. Imipramine. Tricyclic antidepressant. MAOIs. SSRIs. John Cade. Mogens Schou. Lithium carbonate.
Psychoanalysis. CBT.
Thomas Szasz.
Initial Severity and Antidepressant Benefits: A Meta-Analysis of Data Submitted to the Food and Drug Administration (Kirsch et al.).
Chronobiology. Chronobiotics. Melatonin.
Eric Kandel. BDNF.
The global burden of disease (Murray & Lopez) (the author discusses some of the data included in that publication).

January 8, 2018 Posted by | Books, Health Economics, Medicine, Pharmacology, Psychiatry, Psychology | Leave a comment

Depression (I)

Below I have added some quotes and links related to the first half of this book.

Quotes:

“One of the problems encountered in any discussion of depression is that the word is used to mean different things by different people. For many members of the public, the term depression is used to describe normal sadness. In clinical practice, the term depression can be used to describe negative mood states, which are symptoms that can occur in a range of illnesses (e.g. individuals with psychosis may also report depressed mood). However, the term depression can also be used to refer to a diagnosis. When employed in this way it is meant to indicate that a cluster of symptoms has occurred together, with the most common changes being in mood, thoughts, feelings, and behaviours. Theoretically, all these symptoms need to be present to make a diagnosis of depressive disorder.”

“The absence of any laboratory tests in psychiatry means that the diagnosis of depression relies on clinical judgement and the recognition of patterns of symptoms. There are two main problems with this. First, the diagnosis represents an attempt to impose a ‘present/absent’ or ‘yes/no’ classification on a problem that, in reality, is dimensional and varies in duration and severity. Also, many symptoms are likely to show some degree of overlap with pre-existing personality traits. Taken together, this means there is an ongoing concern about the point at which depression or depressive symptoms should be regarded as a mental disorder, that is, where to situate the dividing line on a continuum from health to normal sadness to illness. Second, for many years, there was a lack of consistent agreement on what combination of symptoms and impaired functioning would benefit from clinical intervention. This lack of consensus on the threshold for treatment, or for deciding which treatment to use, is a major source of problems to this day. […] A careful inspection of the criteria for identifying a depressive disorder demonstrates that diagnosis is mainly reliant on the cross-sectional assessment of the way the person presents at that moment in time. It is also emphasized that the current presentation should represent a change from the person’s usual state, as this step helps to begin the process of differentiating illness episodes from long-standing personality traits. Clarifying the longitudinal history of any lifetime problems can also help to establish, for example, whether the person has previously experienced mania (in which case their diagnosis will be revised to bipolar disorder), or whether they have a history of chronic depression, with persistent symptoms that may be less severe but are nevertheless very debilitating (this is usually called dysthymia). In addition, it is important to assess whether the person has another mental or physical disorder as well, as these may frequently co-occur with depression. […] In the absence of diagnostic tests, the current classifications still rely on expert consensus regarding symptom profiles.”

“In summary, for a classification system to have utility it needs to be reliable and valid. If a diagnosis is reliable, doctors will all make the same diagnosis when they interview patients who present with the same set of symptoms. If a diagnosis has predictive validity, it means that it is possible to forecast the future course of the illness in individuals with the same diagnosis and to anticipate their likely response to different treatments. For many decades, the lack of reliability so undermined the credibility of psychiatric diagnoses that most of the revisions of the classification systems between the 1950s and 2010 focused on improving diagnostic reliability. However, insufficient attention has been given to validity, and until this is improved, the criteria used for diagnosing depressive disorders will continue to be regarded as somewhat arbitrary […]. Weaknesses in the systems for the diagnosis and classification of depression are frequently raised in discussions about the existence of depression as a separate entity and concerns about the rationale for treatment. It is notable that general medicine uses a similar approach to making decisions regarding the health–illness dimension. For example, levels of blood pressure exist on a continuum. However, when an individual’s blood pressure measurement reaches a predefined level, it is reported that the person now meets the criteria specified for the diagnosis of hypertension (high blood pressure). Depending on the degree of variation from the norm or average values for their age and gender, the person will be offered different interventions. […] This approach is widely accepted as a rational approach to managing this common physical health problem, yet a similar ‘stepped care’ approach to depression is often derided.”

“There are few differences in the nature of the symptoms experienced by men and women who are depressed, but there may be gender differences in how their distress is expressed or how they react to the symptoms. For example, men may be more likely to become withdrawn rather than to seek support from or confide in other people, they may become more outwardly hostile and have a greater tendency to use alcohol to try to cope with their symptoms. It is also clear that it may be more difficult for men to accept that they have a mental health problem and they are more likely to deny it, delay seeking help, or even to refuse help. […] becoming unemployed, retirement, and loss of a partner and change of social roles can all be risk factors for depression in men. In addition, chronic physical health problems or increasing disability may also act as a precipitant. The relationship between physical illness and depression is complex. When people are depressed they may subjectively report that their general health is worse than that of other people; likewise, people who are ill or in pain may react by becoming depressed. Certain medical problems such as an under-functioning thyroid gland (hypothyroidism) may produce symptoms that are virtually indistinguishable from depression. Overall, the rate of depression in individuals with a chronic physical disease is almost three times higher than those without such problems.”

“A long-standing problem in gathering data about suicide is that many religions and cultures regard it as a sin or an illegal act. This has had several consequences. For example, coroners and other public officials often strive to avoid identifying suspicious deaths as a suicide, meaning that the actual rates of suicide may be under-reported.”

“In Beck’s [depression] model, it is proposed that an individual’s interpretations of events or experiences are encapsulated in automatic thoughts, which arise immediately following the event or even at the same time. […] Beck suggested that these automatic thoughts occur at a conscious level and can be accessible to the individual, although they may not be actively aware of them because they are not concentrating on them. The appraisals that occur in specific situations largely determine the person’s emotional and behavioural responses […] [I]n depression, the content of a person’s thinking is dominated by negative views of themselves, their world, and their future (the so-called negative cognitive triad). Beck’s theory suggests that the themes included in the automatic thoughts are generated via the activation of underlying cognitive structures, called dysfunctional beliefs (or cognitive schemata). All individuals develop a set of rules or ‘silent assumptions’ derived from early learning experiences. Whilst automatic thoughts are momentary, event-specific cognitions, the underlying beliefs operate across a variety of situations and are more permanent. Most of the underlying beliefs held by the average individual are quite adaptive and guide our attempts to act and react in a considered way. Individuals at risk of depression are hypothesized to hold beliefs that are maladaptive and can have an unhelpful influence on them. […] faulty information processing contributes to further deterioration in a person’s mood, which sets up a vicious cycle with more negative mood increasing the risk of negative interpretations of day-to-day life experiences and these negative cognitions worsening the depressed mood. Beck suggested that the underlying beliefs that render an individual vulnerable to depression may be broadly categorized into beliefs about being helpless or unlovable. […] Beliefs about ‘the self’ seem especially important in the maintenance of depression, particularly when connected with low or variable self-esteem.”

“[U]nidimensional models, such as the monoamine hypothesis or the social origins of depression model, are important building blocks for understanding depression. However, in reality there is no one cause and no single pathway to depression and […] multiple factors increase vulnerability to depression. Whether or not someone at risk of depression actually develops the disorder is partly dictated by whether they are exposed to certain types of life events, the perceived level of threat or distress associated with those events (which in turn is influenced by cognitive and emotional reactions and temperament), their ability to cope with these experiences (their resilience or adaptability under stress), and the functioning of their biological stress-sensitivity systems (including the thresholds for switching on their body’s stress responses).”

Some links:

Humorism. Marsilio Ficino. Thomas Willis. William Cullen. Philippe Pinel. Benjamin Rush. Emil Kraepelin. Karl Leonhard. Sigmund Freud.
Depression.
Relation between depression and sociodemographic factors.
Bipolar disorder.
Postnatal depression. Postpartum psychosis.
Epidemiology of suicide. Durkheim’s typology of suicide.
Suicide methods.
Reserpine.
Neuroendocrine hypothesis of depression. HPA (Hypothalamic–Pituitary–Adrenal) axis.
Cognitive behavioral therapy.
Coping responses.
Brown & Harris (1978).
5-HTTLPR.

January 5, 2018 Posted by | Books, Medicine, Psychiatry, Psychology | Leave a comment

Random stuff

I have almost stopped writing posts like these, which has resulted in the accumulation of a very large number of links and studies which I figured I might like to blog at some point. This post is mainly an attempt to deal with the backlog – I won’t cover the material in too much detail.

i. Do Bullies Have More Sex? The answer seems to be a qualified yes. A few quotes:

“Sexual behavior during adolescence is fairly widespread in Western cultures (Zimmer-Gembeck and Helfland 2008) with nearly two thirds of youth having had sexual intercourse by the age of 19 (Finer and Philbin 2013). […] Bullying behavior may aid in intrasexual competition and intersexual selection as a strategy when competing for mates. In line with this contention, bullying has been linked to having a higher number of dating and sexual partners (Dane et al. 2017; Volk et al. 2015). This may be one reason why adolescence coincides with a peak in antisocial or aggressive behaviors, such as bullying (Volk et al. 2006). However, not all adolescents benefit from bullying. Instead, bullying may only benefit adolescents with certain personality traits who are willing and able to leverage bullying as a strategy for engaging in sexual behavior with opposite-sex peers. Therefore, we used two independent cross-sectional samples of older and younger adolescents to determine which personality traits, if any, are associated with leveraging bullying into opportunities for sexual behavior.”

“…bullying by males signals the ability to provide good genes and material resources and to protect offspring (Buss and Shackelford 1997; Volk et al. 2012), because bullying others is a way of displaying attractive qualities such as strength and dominance (Gallup et al. 2007; Reijntjes et al. 2013). As a result, this makes bullies attractive sexual partners to opposite-sex peers while simultaneously suppressing the sexual success of same-sex rivals (Gallup et al. 2011; Koh and Wong 2015; Zimmer-Gembeck et al. 2001). Females may denigrate other females, targeting their appearance and sexual promiscuity (Leenaars et al. 2008; Vaillancourt 2013), which are two qualities relating to male mate preferences. Consequently, derogating these qualities lowers a rival’s appeal as a mate and also intimidates or coerces rivals into withdrawing from intrasexual competition (Campbell 2013; Dane et al. 2017; Fisher and Cox 2009; Vaillancourt 2013). Thus, males may use direct forms of bullying (e.g., physical, verbal) to facilitate intersexual selection (i.e., appear attractive to females), while females may use relational bullying to facilitate intrasexual competition, by making rivals appear less attractive to males.”

The study relies on the use of self-report data, which I find very problematic – so I won’t go into the results here. I’m not quite clear on how those studies mentioned in the discussion ‘have found self-report data [to be] valid under conditions of confidentiality’ – and I remain skeptical. You’ll usually want data from independent observers (e.g. teacher or peer observations) when analyzing these kinds of things. Note in the context of the self-report data problem that if there’s a strong stigma associated with being bullied (there often is, or bullying wouldn’t work as well), asking people if they have been bullied is not much better than asking people if they’re bullying others.

ii. Some topical advice that some people might soon regret not having followed, from the wonderful Things I Learn From My Patients thread:

“If you are a teenage boy experimenting with fireworks, do not empty the gunpowder from a dozen fireworks and try to mix it in your mother’s blender. But if you do decide to do that, don’t hold the lid down with your other hand and stand right over it. This will result in the traumatic amputation of several fingers, burned and skinned forearms, glass shrapnel in your face, and a couple of badly scratched corneas as a start. You will spend months in rehab and never be able to use your left hand again.”

iii. I haven’t talked about the AlphaZero-Stockfish match, but I was of course aware of it and did read a bit about that stuff. Here’s a reddit thread where one of the Stockfish programmers answers questions about the match. A few quotes:

“Which of the two is stronger under ideal conditions is, to me, neither particularly interesting (they are so different that it’s kind of like comparing the maximum speeds of a fish and a bird) nor particularly important (since there is only one of them that you and I can download and run anyway). What is super interesting is that we have two such radically different ways to create a computer chess playing entity with superhuman abilities. […] I don’t think there is anything to learn from AlphaZero that is applicable to Stockfish. They are just too different, you can’t transfer ideas from one to the other.”

“Based on the 100 games played, AlphaZero seems to be about 100 Elo points stronger under the conditions they used. The current development version of Stockfish is something like 40 Elo points stronger than the version used in Google’s experiment. There is a version of Stockfish translated to hand-written x86-64 assembly language that’s about 15 Elo points stronger still. This adds up to roughly half the Elo difference between AlphaZero and Stockfish shown in Google’s experiment.”
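
As an aside, the Elo arithmetic in that quote is easy to translate into expected per-game scores via the standard Elo formula (this is the general rating formula, nothing specific to this match); a minimal Python sketch:

    def expected_score(elo_diff):
        # Standard Elo logistic curve: expected per-game score
        # (win = 1, draw = 0.5) for the player rated elo_diff higher.
        return 1 / (1 + 10 ** (-elo_diff / 400))

    print(round(expected_score(100), 2))  # +100 Elo -> ~0.64 per game
    print(round(expected_score(55), 2))   # +55 Elo  -> ~0.58 per game

So the quoted 100-point gap corresponds to scoring roughly 64 per cent against the opponent, and halving the gap moves the expected score to roughly 58 per cent.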

“It seems that Stockfish was playing with only 1 GB for transposition tables (the area of memory used to store data about the positions previously encountered in the search), which is way too little when running with 64 threads.” [I seem to recall a comp sci guy observing elsewhere that this was less than what was available to his smartphone version of Stockfish, but I didn’t bookmark that comment].

“The time control was a very artificial fixed 1 minute/move. That’s not how chess is traditionally played. Quite a lot of effort has gone into Stockfish’s time management. It’s pretty good at deciding when to move quickly, and when to spend a lot of time on a critical decision. In a fixed time per move game, it will often happen that the engine discovers that there is a problem with the move it wants to play just before the time is out. In a regular time control, it would then spend extra time analysing all alternative moves and trying to find a better one. When you force it to move after exactly one minute, it will play the move it already knows is bad. There is no doubt that this will cause it to lose many games it would otherwise have drawn.”

iv. Thrombolytics for Acute Ischemic Stroke – no benefit found.

“Thrombolysis has been rigorously studied in >60,000 patients for acute thrombotic myocardial infarction, and is proven to reduce mortality. It is theorized that thrombolysis may similarly benefit ischemic stroke patients, though a much smaller number (8,120) has been studied in relevant, large-scale, high-quality trials thus far. […] There are 12 such trials [1–12]. Despite the temptation to pool these data, the studies are clinically heterogeneous. […] Data from multiple trials must be clinically and statistically homogeneous to be validly pooled [14]. Large thrombolytic studies demonstrate wide variations in anatomic stroke regions, small- versus large-vessel occlusion, clinical severity, age, vital sign parameters, stroke scale scores, and times of administration. […] Examining each study individually is therefore, in our opinion, both more valid and more instructive. […] Two of twelve studies suggest a benefit […] In comparison, twice as many studies showed harm, and these were stopped early. This early stoppage means that the number of subjects in studies demonstrating harm would have included over 2,400 subjects based on originally intended enrollments. Pooled analyses are therefore missing these phantom data, which would have further eroded any aggregate benefits. In their absence, any pooled analysis is biased toward benefit. Despite this, there remain five times as many trials showing harm or no benefit (n = 10) as those concluding benefit (n = 2), and 6,675 subjects in trials demonstrating no benefit compared to 1,445 subjects in trials concluding benefit.”

“Thrombolytics for ischemic stroke may be harmful or beneficial. The answer remains elusive. We struggled, therefore, debating between a ‘yellow’ or a ‘red’ light for our recommendation. However, over 60,000 subjects in trials of thrombolytics for coronary thrombosis suggest a consistent beneficial effect across groups and subgroups, with no studies suggesting harm. This consistency was found despite a very small mortality benefit (2.5%) and a very narrow therapeutic window (1% major bleeding). In comparison, the variation in trial results of thrombolytics for stroke and the daunting but consistent adverse effect rate caused by ICH suggested to us that thrombolytics are dangerous unless further study exonerates their use.”

“There is a Cochrane review that pooled estimates of effect [17]. We do not endorse this choice because of clinical heterogeneity. However, we present the NNTs from the pooled analysis for the reader’s benefit. The Cochrane review suggested a 6% reduction in disability […] with thrombolytics. This would mean that 17 were treated for every 1 avoiding an unfavorable outcome. The review also noted a 1% increase in mortality (1 in 100 patients dies because of thrombolytics) and a 5% increase in nonfatal intracranial hemorrhage (1 in 20), for a total of 6% harmed (1 in 17 suffers death or brain hemorrhage).”
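
The NNT arithmetic behind those figures is simple enough to check by hand – each number is just the reciprocal of the corresponding absolute risk difference quoted from the review:

    # Reciprocals of the absolute risk differences quoted above.
    nnt_benefit  = 1 / 0.06           # 6% fewer disabled -> ~17 treated per one helped
    nnh_death    = 1 / 0.01           # 1% excess mortality -> 1 in 100
    nnh_ich      = 1 / 0.05           # 5% excess nonfatal ICH -> 1 in 20
    nnh_any_harm = 1 / (0.01 + 0.05)  # 6% harmed overall -> ~1 in 17

    print(round(nnt_benefit), round(nnh_death), round(nnh_ich), round(nnh_any_harm))
    # -> 17 100 20 17

In other words, on the pooled numbers roughly as many patients are harmed (death or brain hemorrhage) as avoid an unfavorable outcome, which is the reviewers’ point.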

v. Suicide attempts in Asperger Syndrome. An interesting finding: “Over 35% of individuals with AS reported that they had attempted suicide in the past.”

Related: Suicidal ideation and suicide plans or attempts in adults with Asperger’s syndrome attending a specialist diagnostic clinic: a clinical cohort study.

“374 adults (256 men and 118 women) were diagnosed with Asperger’s syndrome in the study period. 243 (66%) of 367 respondents self-reported suicidal ideation, 127 (35%) of 365 respondents self-reported plans or attempts at suicide, and 116 (31%) of 368 respondents self-reported depression. Adults with Asperger’s syndrome were significantly more likely to report lifetime experience of suicidal ideation than were individuals from a general UK population sample (odds ratio 9.6 [95% CI 7.6–11.9], p < 0.0001), people with one, two, or more medical illnesses (p < 0.0001), or people with psychotic illness (p = 0.019). […] Lifetime experience of depression (p = 0.787), suicidal ideation (p = 0.164), and suicide plans or attempts (p = 0.06) did not differ significantly between men and women […] Individuals who reported suicide plans or attempts had significantly higher Autism Spectrum Quotient scores than those who did not […] Empathy Quotient scores and ages did not differ between individuals who did or did not report suicide plans or attempts (table 4). Patients with self-reported depression or suicidal ideation did not have significantly higher Autism Spectrum Quotient scores, Empathy Quotient scores, or age than did those without depression or suicidal ideation”.

The fact that people with Asperger’s are more likely to be depressed and to contemplate suicide is consistent with previous observations that they’re also more likely to die from suicide – for example, a paper I blogged a while back found that in that particular (large Swedish population-based cohort) study, people with ASD were more than 7 times as likely to die from suicide as were the comparable controls.

Also related: Suicidal tendencies hard to spot in some people with autism.

This link has some great graphs and tables of suicide data from the US.

Also autism-related: Increased perception of loudness in autism. This is one of the ‘important ones’ for me personally – I am much more sound-sensitive than are most people.

vi. Early versus Delayed Invasive Intervention in Acute Coronary Syndromes.

“Earlier trials have shown that a routine invasive strategy improves outcomes in patients with acute coronary syndromes without ST-segment elevation. However, the optimal timing of such intervention remains uncertain. […] We randomly assigned 3031 patients with acute coronary syndromes to undergo either routine early intervention (coronary angiography ≤24 hours after randomization) or delayed intervention (coronary angiography ≥36 hours after randomization). The primary outcome was a composite of death, myocardial infarction, or stroke at 6 months. A prespecified secondary outcome was death, myocardial infarction, or refractory ischemia at 6 months. […] Early intervention did not differ greatly from delayed intervention in preventing the primary outcome, but it did reduce the rate of the composite secondary outcome of death, myocardial infarction, or refractory ischemia and was superior to delayed intervention in high-risk patients.”

vii. Some wikipedia links:

Behrens–Fisher problem.
Sailing ship tactics (I figured I had to read up on this if I were to get anything out of the Aubrey-Maturin books).
Anatomical terms of muscle.
Phatic expression (“a phatic expression […] is communication which serves a social function such as small talk and social pleasantries that don’t seek or offer any information of value.”)
Three-domain system.
Beringian wolf (featured).
Subdural hygroma.
Cayley graph.
Schur polynomial.
Solar neutrino problem.
Hadamard product (matrices).
True polar wander.
Newton’s cradle.

viii. Determinant versus permanent (mathematics – technical).

ix. Some years ago I wrote a few English-language posts about some of the various statistical/demographic properties of immigrants living in Denmark, based on numbers included in a publication by Statistics Denmark. I did it by translating the observations included in that publication, which was only published in Danish. I was briefly considering doing the same thing again when the 2017 data arrived, but I decided not to do it as I recalled that it took a lot of time to write those posts back then, and it didn’t seem to me to be worth the effort – but Danish readers might be interested to have a look at the data, if they haven’t already – here’s a link to the publication Indvandrere i Danmark 2017.

x. A banter blitz session with grandmaster Peter Svidler, who recently became the first Russian ever to win the Russian Chess Championship 8 times. He’s currently shared-second in the World Rapid Championship after 10 rounds and is now in the top 10 on the live rating list in both classical and rapid – seems like he’s had a very decent year.

xi. I recently discovered Dr. Whitecoat’s blog. The patient encounters are often interesting.

December 28, 2017 Posted by | Astronomy, autism, Biology, Cardiology, Chess, Computer science, History, Mathematics, Medicine, Neurology, Physics, Psychiatry, Psychology, Random stuff, Statistics, Studies, Wikipedia, Zoology | Leave a comment

Child psychology

I was not impressed with this book, but as mentioned in the short review it was ‘not completely devoid of observations of interest’.

Before I start my proper coverage of the book, here are some related ‘observations’ from a different book I recently read, Bellwether:

““First we’re all going to play a game. Bethany, it’s Brittany’s birthday.” She attempted a game involving balloons with pink Barbies on them and then gave up and let Brittany open her presents. “Open Sandy’s first,” Gina said, handing her the book.
“No, Caitlin, these are Brittany’s presents.”
Brittany ripped the paper off Toads and Diamonds and looked at it blankly.
“That was my favorite fairy tale when I was little,” I said. “It’s about a girl who meets a good fairy, only she doesn’t know it because the fairy’s in disguise—” but Brittany had already tossed it aside and was ripping open a Barbie doll in a glittery dress.
“Totally Hair Barbie!” she shrieked.
“Mine,” Peyton said, and made a grab that left Brittany holding nothing but Barbie’s arm.
“She broke Totally Hair Barbie!” Brittany wailed.
Peyton’s mother stood up and said calmly, “Peyton, I think you need a time-out.”
I thought Peyton needed a good swat, or at least to have Totally Hair Barbie taken away from her and given back to Brittany, but instead her mother led her to the door of Gina’s bedroom. “You can come out when you’re in control of your feelings,” she said to Peyton, who looked like she was in control to me.
“I can’t believe you’re still using time-outs,” Chelsea’s mother said. “Everybody’s using holding now.”
“Holding?” I asked.
“You hold the child immobile on your lap until the negative behavior stops. It produces a feeling of interceptive safety.”
“Really,” I said, looking toward the bedroom door. I would have hated trying to hold Peyton against her will.
“Holding’s been totally abandoned,” Lindsay’s mother said. “We use EE.”
“EE?” I said.
“Esteem Enhancement,” Lindsay’s mother said. “EE addresses the positive peripheral behavior no matter how negative the primary behavior is.”
“Positive peripheral behavior?” Gina said dubiously.
“When Peyton took the Barbie away from Brittany just now,” Lindsay’s mother said, obviously delighted to explain, “you would have said, ‘My, Peyton, what an assertive grip you have.’”

[A little while later, during the same party:]

“My, Peyton,” Lindsay’s mother said, “what a creative thing to do with your frozen yogurt.””

Okay, on to the coverage of the book. I haven’t covered it in much detail, but I have included some observations of interest below.

“[O]ptimal development of grammar (knowledge about language structure) and phonology (knowledge about the sound elements in words) depends on the brain experiencing sufficient linguistic input. So quantity of language matters. The quality of the language used with young children is also important. The easiest way to extend the quality of language is with interactions around books. […] Natural conversations, focused on real events in the here and now, are those which are critical for optimal development. Despite this evidence, just talking to young children is still not valued strongly in many environments. Some studies find that over 60 per cent of utterances to young children are ‘empty language’ — phrases such as ‘stop that’, ‘don’t go there’, and ‘leave that alone’. […] studies of children who experience high levels of such ‘restricted language’ reveal a negative impact on later cognitive, social, and academic development.”

“[Neural] plasticity is largely achieved by the brain growing connections between brain cells that are already there. Any environmental input will cause new connections to form. At the same time, connections that are not used much will be pruned. […] the consistency of what is experienced will be important in determining which connections are pruned and which are retained. […] Brains whose biology makes them less efficient in particular and measurable aspects of processing seem to be at risk in specific areas of development. For example, when auditory processing is less efficient, this can carry a risk of later language impairment.”

“Joint attention has […] been suggested to be the basis of ‘natural pedagogy’ — a social learning system for imparting cultural knowledge. Once attention is shared by adult and infant on an object, an interaction around that object can begin. That interaction usually passes knowledge from carer to child. This is an example of responsive contingency in action — the infant shows an interest in something, the carer responds, and there is an interaction which enables learning. Taking the child’s focus of attention as the starting point for the interaction is very important for effective learning. Of course, skilled carers can also engineer situations in which babies or children will become interested in certain objects. This is the basis of effective play-centred learning. Novel toys or objects are always interesting.”

“Some research suggests that the pitch and amplitude (loudness) of a baby’s cry has been developed by evolution to prompt immediate action by adults. Babies’ cries appear to be designed to be maximally stressful to hear.”

“[T]he important factors in becoming a ‘preferred attachment figure’ are proximity and consistency.”

“[A]dults modify their actions in important ways when they interact with infants. These modifications appear to facilitate learning. ‘Infant-directed action’ is characterized by greater enthusiasm, closer proximity to the infant, greater repetitiveness, and longer gaze to the face than interactions with another adult. Infant-directed action also uses simplified actions with more turn-taking. […] carers tend to use a special tone of voice to talk to babies. This is more sing-song and attention-grabbing than normal conversational speech, and is called ‘infant-directed speech’ [IDS] or ‘Parentese’. All adults and children naturally adopt this special tone when talking to a baby, and babies prefer to listen to Parentese. […] IDS […] heightens pitch, exaggerates the length of words, and uses extra stress, exaggerating the rhythmic or prosodic aspects of speech. […] the heightened prosody increases the salience of acoustic cues to where words begin and end. […] So as well as capturing attention, IDS is emphasizing key linguistic cues that help language acquisition. […] The infant brain seems to cope with the ‘learning problem’ of which sounds matter by initially being sensitive to all the sound elements used by the different world languages. Via acoustic learning during the first year of life, the brain then specializes in the sounds that matter for the particular languages that it is being exposed to.”

“While crawling makes it difficult to carry objects with you on your travels, learning to walk enables babies to carry things. Indeed, walking babies spend most of their time selecting objects and taking them to show their carer, spending on average 30–40 minutes per waking hour interacting with objects. […] Self-generated movement is seen as critical for child development. […] most falling is adaptive, as it helps infants to gain expertise. Indeed, studies show that newly walking infants fall on average 17 times per hour. From the perspective of child psychology, the importance of ‘motor milestones’ like crawling and walking is that they enable greater agency (self-initiated and self-chosen behaviour) on the part of the baby.”

“Statistical learning enables the brain to learn the statistical structure of any event or object. […] Statistical structure is learned in all sensory modalities simultaneously. For example, as the child learns about birds, the child will learn that light body weight, having feathers, having wings, having a beak, singing, and flying, all go together. Each bird that the child sees may be different, but each bird will share the features of flying, having feathers, having wings, and so on. […] The connections that form between the different brain cells that are activated by hearing, seeing, and feeling birds will be repeatedly strengthened for these shared features, thereby creating a multi-modal neural network for that particular concept. The development of this network will be dependent on everyday experiences, and the networks will be richer if the experiences are more varied. This principle of learning supports the use of multi-modal instruction and active experience in nursery and primary school. […] knowledge about concepts is distributed across the entire brain. It is not stored separately in a kind of conceptual ‘dictionary’ or distinct knowledge system. Multi-modal experiences strengthen learning across the whole brain. Accordingly, multisensory learning is the most effective kind of learning for young children.”

“Babies learn words most quickly when an adult both points to and names a new item.”

“…direct teaching of scientific reasoning skills helps children to reason logically independently of their pre-existing beliefs. This is more difficult than it sounds, as pre-existing beliefs exert strong effects. […] in many social situations we are advantaged if we reason on the basis of our pre-existing beliefs. This is one reason that stereotypes form”. [Do remember on a related note that stereotype accuracy is one of the largest and most replicable effects in all of social psychology – US].

“Some gestures have almost universal meaning, like waving goodbye. Babies begin using gestures like this quite early on. Between 10 and 18 months of age, gestures become frequent and are used extensively for communication. […] After around 18 months, the use of gesture starts declining, as vocalization becomes more and more dominant in communication. […] By [that time], most children are entering the two-word stage, when they become able to combine words. […] At this age, children often use a word that they know to refer to many different entities whose names are not yet known. They might use the word ‘bee’ for insects that are not bees, or the word ‘dog’ to refer to horses and cows. Experiments have shown that this is not a semantic confusion. Toddlers do not think that horses and cows are a type of dog. Rather, they have limited language capacities, and so they stretch their limited vocabularies to communicate as flexibly as possible. […] there is a lot of similarity across cultures at the two-word stage regarding which words are combined. Young children combine words to draw attention to objects (‘See doggie!’), to indicate ownership (‘My shoe’), to point out properties of objects (‘Big doggie’), to indicate plurality (‘Two cookie’), and to indicate recurrence (‘Other cookie’). […] It is only as children learn grammar that some divergence is found across languages. This is probably because different languages have different grammatical formats for combining words. […] grammatical learning emerges naturally from extensive language experience (of the utterances of others) and from language use (the novel utterances of the child, which are re-formulated by conversational partners if they are grammatically incorrect).”

“The social and communicative functions of language, and children’s understanding of them, are captured by pragmatics. […] pragmatic aspects of conversation include taking turns, and making sure that the other person has sufficient knowledge of the events being discussed to follow what you are saying. […] To learn about pragmatics, children need to go beyond the literal meaning of the words and make inferences about communicative intent. A conversation is successful when a child has recognized the type of social situation and applied the appropriate formula. […] Children with autism, who have difficulties with social cognition and in reading the mental states of others, find learning the pragmatics of conversation particularly difficult. […] Children with autism often show profound delays in social understanding and do not ‘get’ many social norms. These children may behave quite inappropriately in social settings […] Children with autism may also show very delayed understanding of emotions and of intentions. However, this does not make them anti-social, rather it makes them relatively ineffective at being pro-social.”

“When children have siblings, there are usually developmental advantages for social cognition and psychological understanding. […] Discussing the causes of disputes appears to be particularly important for developing social understanding. Young children need opportunities to ask questions, argue with explanations, and reflect on why other people behave in the way that they do. […] Families that do not talk about the intentions and emotions of others and that do not explicitly discuss social norms will create children with reduced social understanding.”

“[C]hildren, like adults, are more likely to act in pro-social ways to ingroup members. […] Social learning of cultural ‘ingroups’ appears to develop early in children as part of general socio-moral development. […] being loyal to one’s ‘ingroup’ is likely to make the child more popular with the other members of that group. Being in a group thus requires the development of knowledge about how to be loyal, about conforming to pressure and about showing ingroup bias. For example, children may need to make fine judgements about who is more popular within the group, so that they can favour friends who are more likely to be popular with the rest of the group. […] even children as young as 6 years will show more positive responding to the transgression of social rules by ingroup members compared to outgroup members, particularly if they have relatively well-developed understanding of emotions and intentions.”

“Good language skills improve memory, because children with better language skills are able to construct narratively coherent and extended, temporally organized representations of experienced events.”

“Once children begin reading, […] letter-sound knowledge and ‘phonemic awareness’ (the ability to divide words into the single sound elements represented by letters) become the most important predictors of reading development. […] phonemic awareness largely develops as a consequence of being taught to read and write. Research shows that illiterate adults do not have phonemic awareness. […] brain imaging shows that learning to read ‘re-maps’ phonology in the brain. We begin to hear words as sequences of ‘phonemes’ only after we learn to read.”

October 29, 2017 Posted by | Books, Language, Neurology, Psychology | Leave a comment

A few diabetes papers of interest

i. Neurocognitive Functioning in Children and Adolescents at the Time of Type 1 Diabetes Diagnosis: Associations With Glycemic Control 1 Year After Diagnosis.

“Children and youth with type 1 diabetes are at risk for developing neurocognitive dysfunction, especially in the areas of psychomotor speed, attention/executive functioning, and visuomotor integration (1,2). Most research suggests that deficits emerge over time, perhaps in response to the cumulative effect of glycemic extremes (3–6). However, the idea that cognitive changes emerge gradually has been challenged (7–9). Ryan (9) argued that if diabetes has a cumulative effect on cognition, cognitive test performance should be positively correlated with illness duration. Yet he found comparable deficits in psychomotor speed (the most commonly noted area of deficit) in adolescents and young adults with illness duration ranging from 6 to 25 years. He therefore proposed a diathesis model in which cognitive declines in diabetes are especially likely to occur in more vulnerable patients, at crucial periods, in response to illness-related events (e.g., severe hyperglycemia) known to have an impact on the central nervous system (CNS) (8). This model accounts for the finding that cognitive deficits are more likely in children with early-onset diabetes, and for the accelerated cognitive aging seen in diabetic individuals later in life (7). A third hypothesized crucial period is the time leading up to diabetes diagnosis, during which severe fluctuations in blood glucose and persistent hyperglycemia often occur. Concurrent changes in blood-brain barrier permeability could result in a flood of glucose into the brain, with neurotoxic effects (9).”

“In the current study, we report neuropsychological test findings for children and adolescents tested within 3 days of diabetes diagnosis. The purpose of the study was to determine whether neurocognitive impairments are detectable at diagnosis, as predicted by the diathesis hypothesis. We hypothesized that performance on tests of psychomotor speed, visuomotor integration, and attention/executive functioning would be significantly below normative expectations, and that differences would be greater in children with earlier disease onset. We also predicted that diabetic ketoacidosis (DKA), a primary cause of diabetes-related neurological morbidity (12) and a likely proxy for severe peri-onset hyperglycemia, would be associated with poorer performance.”

“Charts were reviewed for 147 children/adolescents aged 5–18 years (mean = 10.4 ± 3.2 years) who completed a short neuropsychological screening during their inpatient hospitalization for new-onset type 1 diabetes, as part of a pilot clinical program intended to identify patients in need of further neuropsychological evaluation. Participants were patients at a large urban children’s hospital in the southwestern U.S. […] Compared with normative expectations, children/youth with type 1 diabetes performed significantly worse on GPD, GPN, VMI, and FAS (P < 0.0001 in all cases), with large decrements evident on all four measures (Fig. 1). A small but significant effect was also evident in DSB (P = 0.022). High incidence of impairment was evident on all neuropsychological tasks completed by older participants (aged 9–18 years) except DSF/DSB (Fig. 2).”

“Deficits in neurocognitive functioning were evident in children and adolescents within days of type 1 diabetes diagnosis. Participants performed >1 SD below normative expectations in bilateral psychomotor speed (GP) and 0.7–0.8 SDs below expected performance in visuomotor integration (VMI) and phonemic fluency (FAS). Incidence of impairment was much higher than normative expectations on all tasks except DSF/DSB. For example, >20% of youth were impaired in dominant hand fine-motor control, and >30% were impaired with their nondominant hand. These findings provide provisional support for Ryan’s hypothesis (7–9) that the peri-onset period may be a time of significant cognitive vulnerability.

Importantly, deficits were not evident on all measures. Performance on measures of attention/executive functioning (TMT-A, TMT-B, DSF, and DSB) was largely consistent with normative expectations, as was reading ability (WRAT-4), suggesting that the below-average performance in other areas was not likely due to malaise or fatigue. Depressive symptoms at diagnosis were associated with performance on TMT-B and FAS, but not on other measures. Thus, it seems unlikely that depressive symptoms accounted for the observed motor slowing.

Instead, the findings suggest that the visual-motor system may be especially vulnerable to early effects of type 1 diabetes. This interpretation is especially compelling given that psychomotor impairment is the most consistently reported long-term cognitive effect of type 1 diabetes. The sensitivity of the visual-motor system at diabetes diagnosis is consistent with a growing body of neuroimaging research implicating posterior white matter tracts and associated gray matter regions (particularly cuneus/precuneus) as areas of vulnerability in type 1 diabetes (30–32). These regions form part of the neural system responsible for integrating visual inputs with motor outputs, and in adults with type 1 diabetes, structural pathology in these regions is directly correlated to performance on GP [grooved pegboard test] (30,31). Arbelaez et al. (33) noted that these brain areas form part of the “default network” (34), a system engaged during internally focused cognition that has high resting glucose metabolism and may be especially vulnerable to glucose variability.”
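A quick note on the impairment figures above: if a test is normed so that scores are (approximately) standard-normal in the general population, the expected ‘base rate’ of impairment follows directly from whatever cutoff is used. The excerpt doesn’t state which cutoff the authors applied – common conventions are −1, −1.5, or −2 SDs – but under any of these the observed rates are several times the expected ones. A rough check:

```python
# Expected impairment "base rates" under a standard-normal score
# distribution, for a few commonly used cutoffs. Illustrative only;
# the excerpt does not state which cutoff the study actually used.
from scipy.stats import norm

for cutoff in (-1.0, -1.5, -2.0):
    print(f"cutoff {cutoff:+.1f} SD -> expected rate {norm.cdf(cutoff):.1%}")
# -1.0 SD -> 15.9%, -1.5 SD -> 6.7%, -2.0 SD -> 2.3%; the reported
# >20-30% impairment on the fine-motor tasks exceeds all three.
```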

“It should be noted that previous studies (e.g., Northam et al. [3]) have not found evidence of neurocognitive dysfunction around the time of diabetes diagnosis. This may be due to study differences in measures, outcomes, and/or time frame. We know of no other studies that completed neuropsychological testing within days of diagnosis. Given our time frame, it is possible that our findings reflect transient effects rather than more permanent changes in the CNS. Contrary to predictions, we found no association between DKA at diagnosis and neurocognitive performance […] However, even transient effects could be considered potential indicators of CNS vulnerability. Neurophysiological changes at the time of diagnosis have been shown to persist under certain circumstances or for some patients. […] [Some] findings suggest that some individuals may be particularly susceptible to the effects of glycemic extremes on neurocognitive function, consistent with a large body of research in developmental neuroscience indicating individual differences in neurobiological vulnerability to adverse events. Thus, although it is possible that the neurocognitive impairments observed in our study might resolve with euglycemia, deficits at diagnosis could still be considered a potential marker of CNS vulnerability to metabolic perturbations (both acute and chronic).”

“In summary, this study provides the first demonstration that type 1 diabetes–associated neurocognitive impairment can be detected at the time of diagnosis, supporting the possibility that deficits arise secondary to peri-onset effects. Whether these effects are transient markers of vulnerability or represent more persistent changes in CNS awaits further study.”

ii. Association Between Impaired Cardiovascular Autonomic Function and Hypoglycemia in Patients With Type 1 Diabetes.

“Cardiovascular autonomic neuropathy (CAN) is a chronic complication of diabetes and an independent predictor of cardiovascular disease (CVD) morbidity and mortality (1–3). The mechanisms of CAN are complex and not fully understood. It can be assessed by simple cardiovascular reflex tests (CARTs) and heart rate variability (HRV) studies that were shown to be sensitive, noninvasive, and reproducible (3,4).”

“HbA1c fails to capture information on the daily fluctuations in blood glucose levels, termed glycemic variability (GV). Recent observations have fostered the notion that GV, independent of HbA1c, may confer an additional risk for the development of micro- and macrovascular diabetes complications (8,9). […] the relationship between GV and chronic complications, specifically CAN, in patients with type 1 diabetes has not been systematically studied. In addition, limited data exist on the relationship between hypoglycemic components of the GV and measures of CAN among subjects with type 1 diabetes (11,12). Therefore, we have designed a prospective study to evaluate the impact and the possible sustained effects of GV on measures of cardiac autonomic function and other cardiovascular complications among subjects with type 1 diabetes […] In the present communication, we report cross-sectional analyses at baseline between indices of hypoglycemic stress and measures of cardiac autonomic function.”

“The following measures of CAN were predefined as outcomes of interests and analyzed: expiration-to-inspiration ratio (E:I), Valsalva ratio, 30:15 ratios, low-frequency (LF) power (0.04 to 0.15 Hz), high-frequency (HF) power (0.15 to 0.4 Hz), and LF/HF at rest and during CARTs. […] We found that LBGI [low blood glucose index] and AUC [area under the curve] hypoglycemia were associated with reduced LF and HF power of HRV [heart rate variability], suggesting an impaired autonomic function, which was independent of glucose control as assessed by the HbA1c.”
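For readers unfamiliar with HRV methodology, here’s a minimal sketch of how LF and HF power are typically computed from a series of RR intervals (the times between successive heartbeats). This is illustrative only – the excerpt doesn’t describe the authors’ exact pipeline, and the input file name is made up:

```python
# Minimal sketch: LF/HF power from RR intervals via Welch's method.
# Not the paper's pipeline; the input file is hypothetical.
import numpy as np
from scipy.interpolate import interp1d
from scipy.signal import welch

rr = np.loadtxt("rr_intervals_s.txt")        # RR intervals in seconds (hypothetical file)
t = np.cumsum(rr)                            # beat times

# Resample to an evenly spaced series (4 Hz) so spectral methods apply.
fs = 4.0
t_even = np.arange(t[0], t[-1], 1 / fs)
rr_even = interp1d(t, rr, kind="cubic")(t_even)

f, psd = welch(rr_even - rr_even.mean(), fs=fs, nperseg=256)

def band_power(lo, hi):
    mask = (f >= lo) & (f < hi)
    return np.trapz(psd[mask], f[mask])

lf = band_power(0.04, 0.15)                  # LF band as defined in the paper
hf = band_power(0.15, 0.40)                  # HF band as defined in the paper
print(f"LF = {lf:.4g}, HF = {hf:.4g}, LF/HF = {lf / hf:.2f}")
```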

“Our findings are in concordance with a recent report demonstrating attenuation of the baroreflex sensitivity and of the sympathetic response to various cardiovascular stressors after antecedent hypoglycemia among healthy subjects who were exposed to acute hypoglycemic stress (18). Similar associations […] were also reported in a small study of subjects with type 2 diabetes (19). […] higher GV and hypoglycemic stress may have an acute effect on modulating autonomic control by inducing a sympathetic/vagal imbalance and a blunting of the cardiac vagal control (18). The impairment in the normal counter-regulatory autonomic responses induced by hypoglycemia on the cardiovascular system could be important in healthy individuals but may be particularly detrimental in individuals with diabetes who have hitherto compromised cardiovascular function and/or subclinical CAN. In these individuals, hypoglycemia may also induce QT interval prolongation, increase plasma catecholamine levels, and lower serum potassium (19,20). In concert, these changes may lower the threshold for serious arrhythmia (19,20) and could result in an increased risk of cardiovascular events and sudden cardiac death. Conversely, the presence of CAN may increase the risk of hypoglycemia through hypoglycemia unawareness and subsequent impaired ability to restore euglycemia (21) through impaired sympathoadrenal response to hypoglycemia or delayed gastric emptying. […] A possible pathogenic role of GV/hypoglycemic stress on CAN development and progression should also be considered. Prior studies in healthy and diabetic subjects have found that higher exposure to hypoglycemia reduces the counter-regulatory hormone (e.g., epinephrine, glucagon, and adrenocorticotropic hormone) and blunts autonomic nervous system responses to subsequent hypoglycemia (21). […] Our data […] suggest that wide glycemic fluctuations, particularly hypoglycemic stress, may increase the risk of CAN in patients with type 1 diabetes.”

“In summary, in this cohort of relatively young and uncomplicated patients with type 1 diabetes, GV and higher hypoglycemic stress were associated with impaired HRV reflective of sympathetic/parasympathetic dysfunction with potential important clinical consequences.”

iii. Elevated Levels of hs-CRP Are Associated With High Prevalence of Depression in Japanese Patients With Type 2 Diabetes: The Diabetes Distress and Care Registry at Tenri (DDCRT 6).

“In the last decade, several studies have been published that suggest a close association between diabetes and depression. Patients with diabetes have a high prevalence of depression (1) […] and a high prevalence of complications (3). In addition, depression is associated with mortality in these patients (4). […] Because of this strong association, several recent studies have suggested the possibility of a common biological pathway such as inflammation as an underlying mechanism of the association between depression and diabetes (5). […] Multiple mechanisms are involved in the association between diabetes and inflammation, including modulation of lipolysis, alteration of glucose uptake by adipose tissue, and an indirect mechanism involving an increase in free fatty acid levels blocking the insulin signaling pathway (10). Psychological stress can also cause inflammation via innervation of cytokine-producing cells and activation of the sympathetic nervous system and adrenergic receptors on macrophages (11). Depression enhances the production of inflammatory cytokines (12–14). Overproduction of inflammatory cytokines may stimulate corticotropin-releasing hormone production, a mechanism that leads to hypothalamic-pituitary axis activity. Conversely, cytokines induce depressive-like behaviors; in studies where healthy participants were given endotoxin infusions to trigger cytokine release, the participants developed classic depressive symptoms (15). Based on this evidence, it could be hypothesized that inflammation is the common biological pathway underlying the association between diabetes and depression.”

“[F]ew studies have examined the clinical role of inflammation and depression as biological correlates in patients with diabetes. […] In this study, we hypothesized that high CRP [C-reactive protein] levels were associated with the high prevalence of depression in patients with diabetes and that this association may be modified by obesity or glycemic control. […] Patient data were derived from the second-year survey of a diabetes registry at Tenri Hospital, a regional tertiary care teaching hospital in Japan. […] 3,573 patients […] were included in the study. […] Overall, mean age, HbA1c level, and BMI were 66.0 years, 7.4% (57.8 mmol/mol), and 24.6 kg/m2, respectively. Patients with major depression tended to be relatively young […] and female […] with a high BMI […], high HbA1c levels […], and high hs-CRP levels […]; had more diabetic nephropathy […], required more insulin therapy […], and exercised less […]”.

“In conclusion, we observed that hs-CRP levels were associated with a high prevalence of major depression in patients with type 2 diabetes with a BMI of ≥25 kg/m2. […] In patients with a BMI of <25 kg/m2, no significant association was found between hs-CRP quintiles and major depression […] We did not observe a significant association between hs-CRP and major depression in either of HbA1c subgroups. […] Our results show that the association between hs-CRP and diabetes is valid even in an Asian population, but it might not be extended to nonobese subjects. […] several factors such as obesity and glycemic control may modify the association between inflammation and depression. […] Obesity is strongly associated with chronic inflammation.”

iv. A Novel Association Between Nondipping and Painful Diabetic Polyneuropathy.

“Sleep problems are common in painful diabetic polyneuropathy (PDPN) (1) and contribute to the effect of pain on quality of life. Nondipping (the absence of the nocturnal fall in blood pressure [BP]) is a recognized feature of diabetic cardiac autonomic neuropathy (CAN) and is attributed to the abnormal prevalence of nocturnal sympathetic activity (2). […] This study aimed to evaluate the relationship of the circadian pattern of BP with both neuropathic pain and pain-related sleep problems in PDPN […] Investigating the relationship between PDPN and BP circadian pattern, we found patients with PDPN exhibited impaired nocturnal decrease in BP compared with those without neuropathy, as well as higher nocturnal systolic BP than both those without DPN and with painless DPN. […] in multivariate analysis including comorbidities and most potential confounders, neuropathic pain was an independent determinant of ∆ in BP and nocturnal systolic BP.”

“PDPN could behave as a marker for the presence and severity of CAN. […] PDPN should increasingly be regarded as a condition of high cardiovascular risk.”

v. Reduced Testing Frequency for Glycated Hemoglobin, HbA1c, Is Associated With Deteriorating Diabetes Control.

I think a potentially important take-away from this paper, which the authors don’t really discuss, relates to time series analysis: in research contexts where HbA1c is available at the individual level at some base frequency, and you encounter individuals for whom the variable is unobserved for some periods, or observed less frequently than you’d expect, such (implicit) missing values may not be missing at random (for more on these topics see e.g. this post). More specifically, in light of the findings of this paper I think it would make a lot of sense when doing time-to-event analyses to default to treating missing values as an indicator of worse-than-average metabolic control during the unobserved part of the time series, especially when values are missing for an extended period of time.
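To make the suggestion concrete, here’s a rough sketch of the kind of flagging I have in mind (the file and column names are made up):

```python
# Flag long gaps between HbA1c tests as a potentially informative
# missingness indicator, rather than treating them as missing at random.
# Hypothetical file and column names.
import pandas as pd

df = pd.read_csv("hba1c_tests.csv", parse_dates=["test_date"])
df = df.sort_values(["patient_id", "test_date"])

# Days since the patient's previous test.
df["gap_days"] = df.groupby("patient_id")["test_date"].diff().dt.days

# Flag gaps well beyond the expected testing frequency (e.g. >6 months,
# the interval the paper associates with deteriorating control).
df["long_gap"] = df["gap_days"] > 183

# In a time-to-event analysis, "long_gap" could then enter as a
# time-varying covariate proxying for worse-than-average control
# during the unobserved period.
print(df.groupby("patient_id")["long_gap"].mean().describe())
```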

The authors of the paper consider metabolic control an outcome to be explained by the testing frequency. That’s one way to approach these things, but it’s not the only one and I think it’s also important to keep in mind that some patients also sometimes make a conscious decision not to show up for their appointments/tests; i.e. the testing frequency is not necessarily fully determined by the medical staff, although they of course have an important impact on this variable.

Some observations from the paper:

“We examined repeat HbA1c tests (400,497 tests in 79,409 patients, 2008–2011) processed by three U.K. clinical laboratories. We examined the relationship between retest interval and 1) percentage change in HbA1c and 2) proportion of cases showing a significant HbA1c rise. The effect of demographic factors on these findings was also explored. […] Figure 1 shows the relationship between repeat requesting interval (categorized in 1-month intervals) and percentage change in HbA1c concentration in the total data set. From 2 months onward, there was a direct relationship between retesting interval and control. A testing frequency of >6 months was associated with deterioration in control. The optimum testing frequency in order to maximize the downward trajectory in HbA1c between two tests was approximately four times per year. Our data also indicate that testing more frequently than 2 months has no benefit over testing every 2–4 months. Relative to the 2–3 month category, all other categories demonstrated statistically higher mean change in HbA1c (all P < 0.001). […] similar patterns were observed for each of the three centers, with the optimum interval to improvement in overall control at ∼3 months across all centers.”

“[I]n patients with poor control, the pattern was similar to that seen in the total group, except that 1) there was generally a more marked decrease or more modest increase in change of HbA1c concentration throughout and, consequently, 2) a downward trajectory in HbA1c was observed when the interval between tests was up to 8 months, rather than the 6 months as seen in the total group. In patients with a starting HbA1c of <6% (<42 mmol/mol), there was a generally linear relationship between interval and increase in HbA1c, with all intervals demonstrating an upward change in mean HbA1c. The intermediate group showed a similar pattern as those with a starting HbA1c of <6% (<42 mmol/mol), but with a steeper slope.”

“In order to examine the potential link between monitoring frequency and the risk of major deterioration in control, we then assessed the relationship between testing interval and proportion of patients demonstrating an increase in HbA1c beyond the normal biological and analytical variation in HbA1c […] Using this definition of significant increase as a ≥9.9% rise in subsequent HbA1c, our data show that the proportion of patients showing this magnitude of rise increased month to month, with increasing intervals between tests for each of the three centers. […] testing at 2–3-monthly intervals would, at a population level, result in a marked reduction in the proportion of cases demonstrating a significant increase compared with annual testing […] irrespective of the baseline HbA1c, there was a generally linear relationship between interval and the proportion demonstrating a significant increase in HbA1c, though the slope of this relationship increased with rising initial HbA1c.”
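As I read it, the core computation behind these figures is simple: pair up consecutive tests, bin the pairs by retest interval, and compute the share showing a ≥9.9% relative rise in HbA1c. (Thresholds of this kind are commonly derived from a ‘reference change value’ combining within-person biological and analytical variation, RCV = √2 · z · √(CV_A² + CV_I²), though the excerpt doesn’t say exactly how the authors arrived at 9.9%.) A sketch on synthetic data, with made-up variable names and a made-up drift model:

```python
# Synthetic illustration of the paper's analysis: proportion of repeat
# tests with a >=9.9% HbA1c rise, by retest interval. The data and the
# drift model are invented; only the binning logic is the point here.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 50_000
interval = rng.integers(1, 25, size=n)              # months between two tests
hba1c_1 = rng.normal(8.0, 1.5, size=n)              # first test (%)
drift = 0.02 * (interval - 4)                       # toy assumption: longer gap, more drift
hba1c_2 = hba1c_1 + rng.normal(drift, 0.6)          # repeat test (%)

df = pd.DataFrame({
    "interval": interval,
    "pct_change": 100 * (hba1c_2 - hba1c_1) / hba1c_1,
})
df["significant_rise"] = df["pct_change"] >= 9.9

bins = pd.cut(df["interval"], bins=[0, 2, 3, 6, 12, 24])
print(df.groupby(bins, observed=True)["significant_rise"].mean())
```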

“Previous data from our and other groups on requesting patterns indicated that relatively few patients in general practice were tested annually (5,6). […] Our data indicate that for a HbA1c retest interval of more than 2 months, there was a direct relationship between retesting interval and control […], with a retest frequency of greater than 6 months being associated with deterioration in control. The data showed that for diabetic patients as a whole, the optimum repeat testing interval should be four times per year, particularly in those with poorer diabetes control (starting HbA1c >7% [≥53 mmol/mol]). […] The optimum retest interval across the three centers was similar, suggesting that our findings may be unrelated to clinical laboratory factors, local policies/protocols on testing, or patient demographics.”

It might be worth mentioning that there are substantial cross-country differences in how often people with diabetes get their HbA1c measured – I’m unsure whether standards have changed since the study period, but at least in Denmark one of the Danish Regions’ specific treatment targets a few years ago was that 95% of diabetics should have had their HbA1c measured within the last year (here’s a relevant link to some stuff I wrote about related topics a while back).

October 2, 2017 Posted by | Cardiology, Diabetes, Immunology, Medicine, Neurology, Psychology, Statistics, Studies | Leave a comment

The Biology of Moral Systems (III)

This will be my last post about the book. It’s an important work which deserves to be read by far more people than have already read it. I have added some quotes and observations from the last chapters of the book below.

“If egoism, as self-interest in the biologists’ sense, is the reason for the promotion of ethical behavior, then, paradoxically, it is expected that everyone will constantly promote the notion that egoism is not a suitable theory of action, and, a fortiori, that he himself is not an egoist. Most of all he must present this appearance to his closest associates because it is in his best interests to do so – except, perhaps, to his closest relatives, to whom his egoism may often be displayed in cooperative ventures from which some distant- or non-relative suffers. Indeed, it may be arguable that it will be in the egoist’s best interest not to know (consciously) or to admit to himself that he is an egoist because of the value to himself of being able to convince others he is not.”

“The function of [societal] punishments and rewards, I have suggested, is to manipulate the behavior of participating individuals, restricting individual efforts to serve their own interests at others’ expense so as to promote harmony and unity within the group. The function of harmony and unity […] is to allow the group to compete against hostile forces, especially other human groups. It is apparent that success of the group may serve the interests of all individuals in the group; but it is also apparent that group success can be achieved with different patterns of individual success differentials within the group. So […] it is in the interests of those who are differentially successful to promote both unity and the rules so that group success will occur without necessitating changes deleterious to them. Similarly, it may be in the interests of those individuals who are relatively unsuccessful to promote dissatisfaction with existing rules and the notion that group success would be more likely if the rules were altered to favor them. […] the rules of morality and law alike seem not to be designed explicitly to allow people to live in harmony within societies but to enable societies to be sufficiently united to deter their enemies. Within-society harmony is the means not the end. […] extreme within-group altruism seems to correlate with and be historically related to between-group strife.”

“There are often few or no legitimate or rational expectations of reciprocity or “fairness” between social groups (especially warring or competing groups such as tribes or nations). Perhaps partly as a consequence, lying, deceit, or otherwise nasty or even heinous acts committed against enemies may sometimes not be regarded as immoral by others within the group of those who commit them. They may even be regarded as highly moral if they seem dramatically to serve the interests of the group whose members commit them.”

“Two major assumptions, made universally or most of the time by philosophers, […] are responsible for the confusion that prevents philosophers from making sense out of morality […]. These assumptions are the following: 1. That proximate and ultimate mechanisms or causes have the same kind of significance and can be considered together as if they were members of the same class of causes; this is a failure to understand that proximate causes are evolved because of ultimate causes, and therefore may be expected to serve them, while the reverse is not true. Thus, pleasure is a proximate mechanism that in the usual environments of history is expected to impel us toward behavior that will contribute to our reproductive success. Contrarily, acts leading to reproductive success are not proximate mechanisms that evolved because they served the ultimate function of bringing us pleasure. 2. That morality inevitably involves some self-sacrifice. This assumption involves at least three elements: a. Failure to consider altruism as benefits to the actor. […] b. Failure to comprehend all avenues of indirect reciprocity within groups. c. Failure to take into account both within-group and between-group benefits.”

“If morality means true sacrifice of one’s own interests, and those of his family, then it seems to me that we could not have evolved to be moral. If morality requires ethical consistency, whereby one does not do socially what he would not advocate and assist all others also to do, then, again, it seems to me that we could not have evolved to be moral. […] humans are not really moral at all, in the sense of “true sacrifice” given above, but […] the concept of morality is useful to them. […] If it is so, then we might imagine that, in the sense and to the extent that they are anthropomorphized, the concepts of saints and angels, as well as that of God, were also created because of their usefulness to us. […] I think there have been far fewer […] truly self-sacrificing individuals than might be supposed, and most cases that might be brought forward are likely instead to be illustrations of the complexity and indirectness of reciprocity, especially the social value of appearing more altruistic than one is. […] I think that […] the concept of God must be viewed as originally generated and maintained for the purpose – now seen by many as immoral – of furthering the interests of one group of humans at the expense of one or more other groups. […] Gods are inventions originally developed to extend the notion that some have greater rights than others to design and enforce rules, and that some are more destined to be leaders, others to be followers. This notion, in turn, arose out of prior asymmetries in both power and judgment […] It works when (because) leaders are (have been) valuable, especially in the context of intergroup competition.”

“We try to move moral issues in the direction of involving no conflict of interest, always, I suggest, by seeking universal agreement with our own point of view.”

“Moral and legal systems are commonly distinguished by those, like moral philosophers, who study them formally. I believe, however, that the distinction between them is usually poorly drawn, and based on a failure to realize that moral as well as legal behavior occurs as a result of probable and possible punishments and rewards. […] we often internalize the rules of law as well as the rules of morality – and perhaps by the same process […] It would seem that the rules of law are simply a specialized, derived aspect of what in earlier societies would have been a part of moral rules. On the other hand, law covers only a fraction of the situations in which morality is involved […] Law […] seems to be little more than ethics written down.”

“Anyone who reads the literature on dispute settlement within different societies […] will quickly understand that genetic relatedness counts: it allows for one-way flows of benefits and alliances. Long-term association also counts; it allows for reliability and also correlates with genetic relatedness. […] The larger the social group, the more fluid its membership; and the more attenuated the social interactions of its membership, the more they are forced to rely on formal law”.

“[I]ndividuals have separate interests. They join forces (live in groups; become social) when they share certain interests that can be better realized for all by close proximity or some forms of cooperation. Typically, however, the overlaps of interests rarely are completely congruent with those of either other individuals or the rest of the group. This means that, even during those times when individual interests within a group are most broadly overlapping, we may expect individuals to temper their cooperation with efforts to realize their own interests, and we may also expect them to have evolved to be adept at using others, or at thwarting the interests of others, to serve themselves (and their relatives). […] When the interests of all are most nearly congruent, it is essentially always due to a threat shared equally. Such threats almost always have to be external (or else they are less likely to affect everyone equally […] External threats to societies are typically other societies. Maintenance of such threats can yield situations in which everyone benefits from rigid, hierarchical, quasi-military, despotic government. Liberties afforded leaders – even elaborate perquisites of dictators – may be tolerated because such threats are ever-present […] Extrinsic threats, and the governments they produce, can yield inflexibilities of political structures that can persist across even lengthy intervals during which the threats are absent. Some societies have been able to structure their defenses against external threats as separate units (armies) within society, and to keep them separate. These rigidly hierarchical, totalitarian, and dictatorial subunits rise and fall in size and influence according to the importance of the external threat. […] Discussion of liberty and equality in democracies closely parallels discussions of morality and moral systems. In either case, adding a perspective from evolutionary biology seems to me to have potential for clarification.”

“It is indeed common, if not universal, to regard moral behavior as a kind of altruism that necessarily yields the altruist less than he gives, and to see egoism as either the opposite of morality or the source of immorality; but […] this view is usually based on an incomplete understanding of nepotism, reciprocity, and the significance of within-group unity for between-group competition. […] My view of moral systems in the real world, however, is that they are systems in which costs and benefits of specific actions are manipulated so as to produce reasonably harmonious associations in which everyone nevertheless pursues his own (in evolutionary terms) self-interest. I do not expect that moral and ethical arguments can ever be finally resolved. Compromises and contracts, then, are (at least currently) the only real solutions to actual conflicts of interest. This is why moral and ethical decisions must arise out of decisions of the collective of affected individuals; there is no single source of right and wrong.

I would also argue against the notion that rationality can be easily employed to produce a world of humans that self-sacrifice in favor of other humans, not to say nonhuman animals, plants, and inanimate objects. Declarations of such intentions may themselves often be the acts of self-interested persons developing, consciously or not, a socially self-benefiting view of themselves as extreme altruists. In this connection it is not irrelevant that the more dissimilar a species or object is to one’s self the less likely it is to provide a competitive threat by seeking the same resources. Accordingly, we should not be surprised to find humans who are highly benevolent toward other species or inanimate objects (some of which may serve them uncomplainingly), yet relatively hostile and noncooperative with fellow humans. As Darwin (1871) noted with respect to dogs, we have selected our domestic animals to return our altruism with interest.”

“It is not easy to discover precisely what historical differences have shaped current male-female differences. If, however, humans are in a general way similar to other highly parental organisms that live in social groups […] then we can hypothesize as follows: for men much of sexual activity has had as a main (ultimate) significance the initiating of pregnancies. It would follow that when a man avoids copulation it is likely to be because (1) there is no likelihood of pregnancy or (2) the costs entailed (venereal disease, danger from competition with other males, lowered status if the event becomes public, or an undesirable commitment) are too great in comparison with the probability that pregnancy will be induced. The man himself may be judging costs against the benefits of immediate sensory pleasures, such as orgasms (i.e., rather than thinking about pregnancy he may say that he was simply uninterested), but I am assuming that selection has tuned such expectations in terms of their probability of leading to actual reproduction […]. For women, I hypothesize, sexual activity per se has been more concerned with the securing of resources (again, I am speaking of ultimate and not necessarily conscious concerns) […]. Ordinarily, when women avoid or resist copulation, I speculate further, the disinterest, aversion, or inhibition may be traceable eventually to one (or more) of three causes: (1) there is no promise of commitment (of resources), (2) there is a likelihood of undesirable commitment (e.g., to a man with inadequate resources), or (3) there is a risk of loss of interest by a man with greater resources, than the one involved […] A man behaving so as to avoid pregnancies, and who derives from an evolutionary background of avoiding pregnancies, should be expected to favor copulation with women who are for age or other reasons incapable of pregnancy. A man derived from an evolutionary process in which securing of pregnancies typically was favored, may be expected to be most interested sexually in women most likely to become pregnant and near the height of the reproductive probability curve […] This means that men should usually be expected to anticipate the greatest sexual pleasure with young, healthy, intelligent women who show promise of providing superior parental care. […] In sexual competition, the alternatives of a man without resources are to present himself as a resource (i.e., as a mimic of one with resources or as one able and likely to secure resources because of his personal attributes […]), to obtain sex by force (rape), or to secure resources through a woman (e.g., allow himself to be kept by a relatively undesired woman, perhaps as a vehicle to secure liaisons with other women). […] in nonhuman species of higher animals, control of the essential resources of parenthood by females correlates with lack of parental behavior by males, promiscuous polygyny, and absence of long-term pair bonds. There is some evidence of parallel trends within human societies (cf. Flinn, 1981).” [It’s of some note that quite a few good books have been written on these topics since Alexander first published his book, so there are many places to look for detailed coverage of topics like these if you’re curious to know more – I can recommend both Kappeler & van Schaik (a must-read book on sexual selection, in my opinion) & Bobby Low. I didn’t think too highly of Miller or Meston & Buss, but those are a few other books on these topics which I’ve read – US].

“The reason that evolutionary knowledge has no moral content is [that] morality is a matter of whose interests one should, by conscious and willful behavior, serve, and how much; evolutionary knowledge contains no messages on this issue. The most it can do is provide information about the reasons for current conditions and predict some consequences of alternative courses of action. […] If some biologists and nonbiologists make unfounded assertions into conclusions, or develop pernicious and fallible arguments, then those assertions and arguments should be exposed for what they are. The reason for doing this, however, is not […should not be..? – US] to prevent or discourage any and all analyses of human activities, but to enable us to get on with a proper sort of analysis. Those who malign without being specific; who attack people rather than ideas; who gratuitously translate hypotheses into conclusions and then refer to them as “explanations,” “stories,” or “just-so-stories”; who parade the worst examples of argument and investigation with the apparent purpose of making all efforts at human self-analysis seem silly and trivial, I see as dangerously close to being ideologues at least as worrisome as those they malign. I cannot avoid the impression that their purpose is not to enlighten, but to play upon the uneasiness of those for whom the approach of evolutionary biology is alien and disquieting, perhaps for political rather than scientific purposes. It is more than a little ironic that the argument of politics rather than science is their own chief accusation with respect to scientists seeking to analyze human behavior in evolutionary terms (e.g. Gould and Lewontin, 1979 […]).”

“[C]urrent selective theory indicates that natural selection has never operated to prevent species extinction. Instead it operates by saving the genetic materials of those individuals or families that outreproduce others. Whether species become extinct or not (and most have) is an incidental or accidental effect of natural selection. An inference from this is that the members of no species are equipped, as a direct result of their evolutionary history, with traits designed explicitly to prevent extinction when that possibility looms. […] Humans are no exception: unless their comprehension of the likelihood of extinction is so clear and real that they perceive the threat to themselves as individuals, and to their loved ones, they cannot be expected to take the collective action that will be necessary to reduce the risk of extinction.”

“In examining ourselves […] we are forced to use the attributes we wish to analyze to carry out the analysis, while resisting certain aspects of the analysis. At the very same time, we pretend that we are not resisting at all but are instead giving perfectly legitimate objections; and we use our realization that others will resist the analysis, for reasons as arcane as our own, to enlist their support in our resistance. And they very likely will give it. […] If arguments such as those made here have any validity it follows that a problem faced by everyone, in respect to morality, is that of discovering how to subvert or reduce some aspects of individual selfishness that evidently derive from our history of genetic individuality.”

“Essentially everyone thinks of himself as well-meaning, but from my viewpoint a society of well-meaning people who understand themselves and their history very well is a better milieu than a society of well-meaning people who do not.”

September 22, 2017 Posted by | Anthropology, Biology, Books, Evolutionary biology, Genetics, Philosophy, Psychology, Religion | Leave a comment

Depression and Heart Disease (II)

Below I have added some more observations from the book, which I gave four stars on goodreads.

“A meta-analysis of twin (and family) studies estimated the heritability of adult MDD around 40% [16] and this estimate is strikingly stable across different countries [17, 18]. If measurement error due to unreliability is taken into account by analysing MDD assessed on two occasions, heritability estimates increase to 66% [19]. Twin studies in children further show that there is already a large genetic contribution to depressive symptoms in youth, with heritability estimates varying between 50% and 80% [20–22]. […] Cardiovascular research in twin samples has suggested a clear-cut genetic contribution to hypertension (h² = 61%) [30], fatal stroke (h² = 32%) [31] and CAD (h² = 57% in males and 38% in females) [32]. […] A very important, and perhaps underestimated, source of pleiotropy in the association of MDD and CAD are the major behavioural risk factors for CAD: smoking and physical inactivity. These factors are sometimes considered ‘environmental’, but twin studies have shown that such behaviours have a strong genetic component [33–35]. Heritability estimates for [many] established risk factors [for CAD – e.g. BMI, smoking, physical inactivity – US] are 50% or higher in most adult twin samples and these estimates remain remarkably similar across the adult life span [41–43].”
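For readers wondering where heritability (h²) figures like these come from: the classical back-of-the-envelope estimator in the twin design is Falconer’s formula, which compares the MZ and DZ twin correlations for a trait. The studies cited above use more elaborate structural equation models, but the underlying logic is the same:

```latex
% Falconer's decomposition in the classical twin design:
% additive genetic (h^2), shared environment (c^2), unique environment (e^2)
h^2 = 2\,(r_{MZ} - r_{DZ}), \qquad
c^2 = 2\,r_{DZ} - r_{MZ}, \qquad
e^2 = 1 - r_{MZ}
```

With, say, rMZ = 0.45 and rDZ = 0.25, this gives h² = 2(0.45 − 0.25) = 0.40, i.e. roughly the 40% MDD heritability quoted above (these twin correlations are illustrative, not taken from the cited meta-analysis).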

“The crucial question is whether the genetic factors underlying MDD also play a role in CAD and CAD risk factors. To test for an overlap in the genetic factors, a bivariate extension of the structural equation model for twin data can be used [57]. […] If the depressive symptoms in a twin predict the IL-6 level in his/her co-twin, this can only be explained by an underlying factor that affects both depression and IL-6 levels and is shared by members of a family. If the prediction is much stronger in MZ than in DZ twins, this signals that the underlying factor is their shared genetic make-up, rather than their shared (family) environment. […] It is important to note clearly here that genetic correlations do not prove the existence of pleiotropy, because genes that influence MDD may, through causal effects of MDD on CAD risk, also become ‘CAD genes’. The absence of a genetic correlation, however, can be used to falsify the existence of genetic pleiotropy. For instance, the hypothesis that genetic pleiotropy explains part of the association between depressive symptoms and IL-6 requires the genetic correlation between these traits to be significantly different from zero. [Furthermore,] the genetic correlation should have a positive value. A negative genetic correlation would signal that genes that increase the risk for depression decrease the risk for higher IL-6 levels, which would go against the genetic pleiotropy hypothesis. […] Su et al. [26] […] tested pleiotropy as a possible source of the association of depressive symptoms with Il-6 in 188 twin pairs of the Vietnam Era Twin (VET) Registry. The genetic correlation between depressive symptoms and IL-6 was found to be positive and significant (RA = 0.22, p = 0.046)”
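The cross-twin cross-trait logic described here is easy to illustrate with simulated data: let a single latent genetic factor influence both traits, and compare the correlation between one twin’s depressive symptoms and the co-twin’s IL-6 level in MZ pairs (sharing all their genes) versus DZ pairs (sharing half on average). A minimal sketch – not the structural equation model used in the cited studies, and all effect sizes are made up:

```python
# Cross-twin cross-trait correlations under genetic pleiotropy.
# Synthetic data; illustrates the MZ/DZ comparison, nothing more.
import numpy as np

rng = np.random.default_rng(0)

def cross_twin_cross_trait_r(n, genetic_sharing):
    g1 = rng.normal(size=n)                       # twin 1's latent genetic factor
    g2 = genetic_sharing * g1 + np.sqrt(1 - genetic_sharing**2) * rng.normal(size=n)
    dep_twin1 = 0.6 * g1 + 0.8 * rng.normal(size=n)   # depressive symptoms, twin 1
    il6_twin2 = 0.5 * g2 + 0.9 * rng.normal(size=n)   # IL-6 level, co-twin
    return np.corrcoef(dep_twin1, il6_twin2)[0, 1]

r_mz = cross_twin_cross_trait_r(100_000, 1.0)     # MZ: share 100% of genes
r_dz = cross_twin_cross_trait_r(100_000, 0.5)     # DZ: share ~50% on average
print(f"MZ = {r_mz:.3f}, DZ = {r_dz:.3f}")
# MZ roughly twice DZ -> the depression/IL-6 association runs through
# shared genes rather than shared (family) environment.
```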

“For the association between MDD and physical inactivity, the dominant hypothesis has not been that MDD causes a reduction in regular exercise, but instead that regular exercise may act as a protective factor against mood disorders. […] we used the twin method to perform a rigorous test of this popular hypothesis [on] 8558 twins and their family members using their longitudinal data across 2-, 4-, 7-, 9- and 11-year follow-up periods. In spite of sufficient statistical power, we found only the genetic correlation to be significant (ranging between −0.16 and −0.44 for different symptom scales and different time-lags). The environmental correlations were essentially zero. This means that the environmental factors that cause a person to take up exercise do not cause lower anxiety or depressive symptoms in that person, currently or at any future time point. In contrast, the genetic factors that cause a person to take up exercise also cause lower anxiety or depressive symptoms in that person, at the present and all future time points. This pattern of results falsifies the causal hypothesis and leaves genetic pleiotropy as the most likely source for the association between exercise and lower levels of anxiety and depressive symptoms in the population at large. […] Taken together, [the] studies support the idea that genetic pleiotropy may be a factor contributing to the increased risk for CAD in subjects suffering from MDD or reporting high counts of depressive symptoms. The absence of environmental correlations in the presence of significant genetic correlations for a number of the CAD risk factors (CFR, cholesterol, inflammation and regular exercise) suggests that pleiotropy is the sole reason for the association between MDD and these CAD risk factors, whereas for other CAD risk factors (e.g. smoking) and CAD incidence itself, pleiotropy may coexist with causal effects.”

“By far the most tested polymorphism in psychiatric genetics is a 43-base pair insertion or deletion in the promoter region of the serotonin transporter gene (5HTT, renamed SLC6A4). About 55% of Caucasians carry a long allele (L) with 16 repeat units. The short allele (S, with 14 repeat units) of this length polymorphism repeat (LPR) reduces transcriptional efficiency, resulting in decreased serotonin transporter expression and function [83]. Because serotonin plays a key role in one of the major theories of MDD [84], and because the most prescribed antidepressants act directly on this transporter, 5HTT is an obvious candidate gene for this disorder. […] The wealth of studies attempting to associate the 5HTTLPR to MDD or related personality traits tells a revealing story about the fate of most candidate genes in psychiatric genetics. Many conflicting findings have been reported, and the two largest studies failed to link the 5HTTLPR to depressive symptoms or clinical MDD [85, 86]. Even at the level of reviews and meta-analyses, conflicting conclusions have been drawn about the role of this polymorphism in the development of MDD [87, 88]. The initially promising explanation for discrepant findings – potential interactive effects of the 5HTTLPR and stressful life events [89] – did not survive meta-analysis [90].”

“Across the board, overlooking the wealth of candidate gene studies on MDD, one is inclined to conclude that this approach has failed to unambiguously identify genetic variants involved in MDD […]. Hope is now focused on the newer GWA [genome wide association] approach. […] At the time of writing, only two GWA studies had been published on MDD [81, 95]. […] In theory, the strategy to identify potential pleiotropic genes in the MDD–CAD relationship is extremely straightforward. We simply select the genes that occur in the lists of confirmed genes from the GWA studies for both traits. In practice, this is hard to do, because genetics in psychiatry is clearly lagging behind genetics in cardiology and diabetes medicine. […] What is shown by the reviewed twin studies is that some genetic variants may influence MDD and CAD risk factors. This can occur through one of three mechanisms: (a) the genetic variants that increase the risk for MDD become part of the heritability of CAD through a causal effect of MDD on CAD risk factors (causality); (b) the genetic variants that increase the risk for CAD become part of the heritability of MDD through a direct causal effect of CAD on MDD (reverse causality); (c) the genetic variants influence shared risk factors that independently increase the risk for MDD as well as CAD (pleiotropy). I suggest that to fully explain the MDD–CAD association we need to be willing to be open to the possibility that these three mechanisms co-exist. Even in the presence of true pleiotropic effects, MDD may influence CAD risk factors, and having CAD in turn may worsen the course of MDD.”

“Patients with depression are more likely to exhibit several unhealthy behaviours or avoid other health-promoting ones than those without depression. […] Patients with depression are more likely to have sleep disturbances [6]. […] sleep deprivation has been linked with obesity, diabetes and the metabolic syndrome [13]. […] Physical inactivity and depression display a complex, bidirectional relationship. Depression leads to physical inactivity and physical inactivity exacerbates depression [19]. […] smoking rates among those with depression are about twice that of the general population [29]. […] Poor attention to self-care is often a problem among those with major depressive disorder. In the most severe cases, those with depression may become inattentive to their personal hygiene. One aspect of this relationship that deserves special attention with respect to cardiovascular disease is the association of depression and periodontal disease. […] depression is associated with poor adherence to medical treatment regimens in many chronic illnesses, including heart disease. […] There is some evidence that among patients with an acute coronary syndrome, improvement in depression is associated with improvement in adherence. […] Individuals with depression are often socially withdrawn or isolated. It has been shown that patients with heart disease who are depressed have less social support [64], and that social isolation or poor social support is associated with increased mortality in heart disease patients [65–68]. […] [C]linicians who make recommendations to patients recovering from a heart attack should be aware that low levels of social support and social isolation are particularly common among depressed individuals and that high levels of social support appear to protect patients from some of the negative effects of depression [78].”

“Self-efficacy describes an individual’s self-confidence in his/her ability to accomplish a particular task or behaviour. Self-efficacy is an important construct to consider when one examines the psychological mechanisms linking depression and heart disease, since it influences an individual’s engagement in behaviour and lifestyle changes that may be critical to improving cardiovascular risk. Many studies on individuals with chronic illness show that depression is often associated with low self-efficacy [95–97]. […] Low self-efficacy is associated with poor adherence behaviour in patients with heart failure [101]. […] Much of the interest in self-efficacy comes from the fact that it is modifiable. Self-efficacy-enhancing interventions have been shown to improve cardiac patients’ self-efficacy and thereby improve cardiac health outcomes [102]. […] One problem with targeting self-efficacy in depressed heart disease patients is [however] that depressive symptoms reduce the effects of self-efficacy-enhancing interventions [105, 106].”

“Taken together, [the] SADHART and ENRICHD [studies] suggest, but do not prove, that antidepressant drug therapy in general, and SSRI treatment in particular, improve cardiovascular outcomes in depressed post-acute coronary syndrome (ACS) patients. […] even large epidemiological studies of depression and antidepressant treatment are not usually informative, because they confound the effects of depression and antidepressant treatment. […] However, there is one Finnish cohort study in which all subjects […] were followed up through a nationwide computerised database [17]. The purpose of this study was not to examine the relationship between depression and cardiac mortality, but rather to look at the relationship between antidepressant use and suicide. […] unexpectedly, ‘antidepressant use, and especially SSRI use, was associated with a marked reduction in total mortality (−49%, p < 0.001), mostly attributable to a decrease in cardiovascular deaths’. The study involved 15 390 patients with a mean follow-up of 3.4 years […] One of the marked differences between the SSRIs and the earlier tricyclic antidepressants is that the SSRIs do not cause cardiac death in overdose as the tricyclics do [41]. There has been literature that suggested that tricyclics even at therapeutic doses could be cardiotoxic and more problematic than SSRIs [42, 43]. What has been surprising is that both in the clinical trial data from ENRICHD and the epidemiological data from Finland, tricyclic treatment has also been associated with a decreased risk of mortality. […] Given that SSRI treatment of depression in the post-ACS period is safe, effective in reducing depressed mood, able to improve health behaviours and may reduce subsequent cardiac morbidity and mortality, it would seem obvious that treating depression is strongly indicated. However, the vast majority of post-ACS patients will not see a psychiatrically trained professional and many cases are not identified [33].”

“That depression is associated with cardiovascular morbidity and mortality is no longer open to question. Similarly, there is no question that the risk of morbidity and mortality increases with increasing severity of depression. Questions remain about the mechanisms that underlie this association, whether all types of depression carry the same degree of risk and to what degree treating depression reduces that risk. There is no question that the benefits of treating depression associated with coronary artery disease far outweigh the risks.”

“Two competing trends are emerging in research on psychotherapy for depression in cardiac patients. First, the few rigorous RCTs that have been conducted so far have shown that even the most efficacious of the current generation of interventions produce relatively modest outcomes. […] Second, there is a growing recognition that, even if an intervention is highly efficacious, it may be difficult to translate into clinical practice if it requires intensive or extensive contacts with a highly trained, experienced, clinically sophisticated psychotherapist. It can even be difficult to implement such interventions in the setting of carefully controlled, randomised efficacy trials. Consequently, there are efforts to develop simpler, more efficient interventions that can be delivered by a wider variety of interventionists. […] Although much more work remains to be done in this area, enough is already known about psychotherapy for comorbid depression in heart disease to suggest that a higher priority should be placed on translation of this research into clinical practice. In many cases, cardiac patients do not receive any treatment for their depression.”

August 14, 2017 Posted by | Books, Cardiology, Diabetes, Genetics, Medicine, Pharmacology, Psychiatry, Psychology | Leave a comment

Depression and Heart Disease (I)

I’m currently reading this book. It’s a great book, with lots of interesting observations.

Below I’ve added some quotes from the book.

“Frasure-Smith et al. [1] demonstrated that patients diagnosed with depression post MI [myocardial infarction, US] were more than five times more likely to die from cardiac causes by 6 months than those without major depression. At 18 months, cardiac mortality had reached 20% in patients with major depression, compared with only 3% in non-depressed patients [5]. Recent work has confirmed and extended these findings. A meta-analysis of 22 studies of post-MI subjects found that post-MI depression was associated with a 2.0–2.5-fold increased risk of negative cardiovascular outcomes [6]. Another meta-analysis examining 20 studies of subjects with MI, coronary artery bypass graft (CABG), angioplasty or angiographically documented CAD found a twofold increased risk of death among depressed compared with non-depressed patients [7]. Though studies included in these meta-analyses had substantial methodological variability, the overall results were quite similar [8].”

“Blumenthal et al. [31] published the largest cohort study (N = 817) to date on depression in patients undergoing CABG and measured depression scores, using the CES-D, before and at 6 months after CABG. Of those patients, 26% had minor depression (CES-D score 16–26) and 12% had moderate to severe depression (CES-D score ≥27). Over a mean follow-up of 5.2 years, the risk of death, compared with those without depression, was 2.4 (HR adjusted; 95% CI 1.4, 4.0) in patients with moderate to severe depression and 2.2 (95% CI 1.2, 4.2) in those whose depression persisted from baseline to follow-up at 6 months. This is one of the few studies that found a dose response (in terms of severity and duration) between depression and death in CABG in particular and in CAD in general.”

“Of the patients with known CAD but no recent MI, 12–23% have major depressive disorder by DSM-III or DSM-IV criteria [20, 21]. Two studies have examined the prognostic association of depression in patients whose CAD was confirmed by angiography. […] In [Carney et al.], a diagnosis of major depression by DSM-III criteria was the best predictor of cardiac events (MI, bypass surgery or death) at 1 year, more potent than other clinical risk factors such as impaired left ventricular function, severity of coronary disease and smoking among the 52 patients. The relative risk of a cardiac event was 2.2 times higher in patients with major depression than those with no depression. […] Barefoot et al. [23] provided a larger sample size and longer follow-up duration in their study of 1250 patients who had undergone their first angiogram. […] Compared with non-depressed patients, those who were moderately to severely depressed had 69% higher odds of cardiac death and 78% higher odds of all-cause mortality. The mildly depressed had a 38% higher risk of cardiac death and a 57% higher risk of all-cause mortality than non-depressed patients.”

“Ford et al. [43] prospectively followed all male medical students who entered the Johns Hopkins Medical School from 1948 to 1964. At entry, the participants completed questionnaires about their personal and family history, health status and health behaviour, and underwent a standard medical examination. The cohort was then followed after graduation by mailed, annual questionnaires. The incidence of depression in this study was based on the mailed surveys […] 1190 participants [were included in the] analysis. The cumulative incidence of clinical depression in this population at 40 years of follow-up was 12%, with no evidence of a temporal change in the incidence. […] In unadjusted analysis, clinical depression was associated with an almost twofold higher risk of subsequent CAD. This association remained after adjustment for time-dependent covariates […]. The relative risk ratio for CAD development with versus without clinical depression was 2.12 (95% CI 1.24, 3.63), as was their relative risk ratio for future MI (95% CI 1.11, 4.06), after adjustment for age, baseline serum cholesterol level, parental MI, physical activity, time-dependent smoking, hypertension and diabetes. The median time from the first episode of clinical depression to first CAD event was 15 years, with a range of 1–44 years.”

“In the Women’s Ischaemia Syndrome Evaluation (WISE) study, 505 women referred for coronary angiography were followed for a mean of 4.9 years and completed the BDI [46]. Significantly increased mortality and cardiovascular events were found among women with elevated BDI scores, even after adjustment for age, cholesterol, stenosis score on angiography, smoking, diabetes, education, hypertension and body mass index (RR 3.1; 95% CI 1.5, 6.3). […] Further compelling evidence comes from a meta-analysis of 28 studies comprising almost 80 000 subjects [47], which demonstrated that, despite heterogeneity and differences in study quality, depression was consistently associated with increased risk of cardiovascular diseases in general, including stroke.”

“The preponderance of evidence strongly suggests that depression is a risk factor for CAD [coronary artery disease, US] development. […] In summary, it is fair to conclude that depression plays a significant role in CAD development, independent of conventional risk factors, and its adverse impact endures over time. The impact of depression on the risk of MI is probably similar to that of smoking [52]. […] Results of longitudinal cohort studies suggest that depression occurs before the onset of clinically significant CAD […] Recent brain imaging studies have indicated that lesions resulting from cerebrovascular insufficiency may lead to clinical depression [54, 55]. Depression may be a clinical manifestation of atherosclerotic lesions in certain areas of the brain that cause circulatory deficits. The depression then exacerbates the onset of CAD. The exact aetiological mechanism of depression and CAD development remains to be clarified.”

“Rutledge et al. [65] conducted a meta-analysis in 2006 in order to better understand the prevalence of depression among patients with CHF and the magnitude of the relationship between depression and clinical outcomes in the CHF population. They found that clinically significant depression was present in 21.5% of CHF patients, varying by the use of questionnaires versus diagnostic interview (33.6% and 19.3%, respectively). The combined results suggested higher rates of death and secondary events (RR 2.1; 95% CI 1.7, 2.6), and trends toward increased health care use and higher rates of hospitalisation and emergency room visits among depressed patients.”

“In the past 15 years, evidence has been provided that physically healthy subjects who suffer from depression are at increased risk for cardiovascular morbidity and mortality [1, 2], and that the occurrence of depression in patients with either unstable angina [3] or myocardial infarction (MI) [4] increases the risk for subsequent cardiac death. Moreover, epidemiological studies have proved that cardiovascular disease is a risk factor for depression, since the prevalence of depression in individuals with a recent MI or with coronary artery disease (CAD) or congestive heart failure has been found to be significantly higher than in the general population [5, 6]. […] findings suggest a bidirectional association between depression and cardiovascular disease. The pathophysiological mechanisms underlying this association are, at present, largely unclear, but several candidate mechanisms have been proposed.”

“Autonomic nervous system dysregulation is one of the most plausible candidate mechanisms underlying the relationship between depression and ischaemic heart disease, since changes of autonomic tone have been detected in both depression and cardiovascular disease [7], and autonomic imbalance […] has been found to lower the threshold for ventricular tachycardia, ventricular fibrillation and sudden cardiac death in patients with CAD [8, 9]. […] Imbalance between prothrombotic and antithrombotic mechanisms and endothelial dysfunction have [also] been suggested to contribute to the increased risk of cardiac events in both medically well patients with depression and depressed patients with CAD. Depression has been consistently associated with enhanced platelet activation […] evidence has accumulated that selective serotonin reuptake inhibitors (SSRIs) reduce platelet hyperreactivity and hyperaggregation of depressed patients [39, 40] and reduce the release of the platelet/endothelial biomarkers β-thromboglobulin, P-selectin and E-selectin in depressed patients with acute CAD [41]. This may explain the efficacy of SSRIs in reducing the risk of mortality in depressed patients with CAD [42–44].”

“[S]everal studies have shown that reduced endothelium-dependent flow-mediated vasodilatation […] occurs in depressed adults with or without CAD [48–50]. Atherosclerosis with subsequent plaque rupture and thrombosis is the main determinant of ischaemic cardiovascular events, and atherosclerosis itself is now recognised to be fundamentally an inflammatory disease [56]. Since activation of inflammatory processes is common to both depression and cardiovascular disease, it would be reasonable to argue that the link between depression and ischaemic heart disease might be mediated by inflammation. Evidence has been provided that major depression is associated with a significant increase in circulating levels of both pro-inflammatory cytokines, such as IL-6 and TNF-α, and inflammatory acute phase proteins, especially the C-reactive protein (CRP) [57, 58], and that antidepressant treatment is able to normalise CRP levels irrespective of whether or not patients are clinically improved [59]. […] Vaccarino et al. [79] assessed specifically whether inflammation is the mechanism linking depression to ischaemic cardiac events and found that, in women with suspected coronary ischaemia, depression was associated with increased circulating levels of CRP and IL-6 and was a strong predictor of ischaemic cardiac events”

“Major depression has been consistently associated with hyperactivity of the HPA axis, with a consequent overstimulation of the sympathetic nervous system, which in turn results in increased circulating catecholamine levels and enhanced serum cortisol concentrations [68–70]. This may cause an imbalance in sympathetic and parasympathetic activity, which results in elevated heart rate and blood pressure, reduced HRV [heart rate variability], disruption of ventricular electrophysiology with increased risk of ventricular arrhythmias as well as an increased risk of atherosclerotic plaque rupture and acute coronary thrombosis. […] In addition, glucocorticoids mobilise free fatty acids, causing endothelial inflammation and excessive clotting, and are associated with hypertension, hypercholesterolaemia and glucose dysregulation [88, 89], which are risk factors for CAD.”

“Most of the literature on [the] comorbidity [between major depressive disorder (MDD) and coronary artery disease (CAD), US] has tended to favour the hypothesis of a causal effect of MDD on CAD, but reversed causality has also been suggested to contribute. Patients with severe CAD at baseline, and consequently a worse prognosis, may simply be more prone to report mood disturbances than less severely ill patients. Furthermore, in pre-morbid populations, incipient atherosclerosis in cerebral vessels may cause depressive symptoms before the onset of actual cardiac or cerebrovascular events, a variant of reverse causality known as the ‘vascular depression’ hypothesis [2]. To resolve causality, comorbidity between MDD and CAD has been addressed in longitudinal designs. Most prospective studies reported that clinical depression or depressive symptoms at baseline predicted higher incidence of heart disease at follow-up [1], which seems to favour the hypothesis of causal effects of MDD. We need to remind ourselves, however […] [that] [p]rospective associations do not necessarily equate causation. Higher incidence of CAD in depressed individuals may reflect the operation of common underlying factors on MDD and CAD that become manifest in mental health at an earlier stage than in cardiac health. […] [T]he association between MDD and CAD may be due to underlying genetic factors that lead to increased symptoms of anxiety and depression, but may also independently influence the atherosclerotic process. This phenomenon, where low-level biological variation has effects on multiple complex traits at the organ and behavioural level, is called genetic ‘pleiotropy’. If present in a time-lagged form, that is if genetic effects on MDD risk precede effects of the same genetic variants on CAD risk, this phenomenon can cause longitudinal correlations that mimic a causal effect of MDD.”
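
The time-lagged pleiotropy scenario sketched above is easy to make concrete with a small simulation. In the toy example below (all parameters are invented for illustration), a single latent genetic factor raises MDD risk early and CAD risk later, with no causal path from MDD to CAD at all – yet baseline depression still ‘predicts’ incident CAD, exactly as in the prospective studies:

```python
# A minimal sketch of time-lagged pleiotropy: the factor g influences both
# traits, MDD manifests early and CAD late, and there is no causal arrow
# from MDD to CAD anywhere in the data-generating process.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

g = rng.normal(size=n)                    # latent genetic liability
mdd = (g + rng.normal(size=n)) > 1.0      # depression, measured at baseline
cad = (g + rng.normal(size=n)) > 1.5      # heart disease, measured at follow-up

# The "prospective" association: CAD incidence by baseline MDD status.
risk_mdd = cad[mdd].mean()
risk_no_mdd = cad[~mdd].mean()
print(f"CAD risk, depressed at baseline:     {risk_mdd:.3f}")
print(f"CAD risk, not depressed at baseline: {risk_no_mdd:.3f}")
print(f"Relative risk: {risk_mdd / risk_no_mdd:.2f}")  # well above 1, purely via g
```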

August 12, 2017 Posted by | Books, Cardiology, Genetics, Medicine, Neurology, Pharmacology, Psychiatry, Psychology | Leave a comment

A few diabetes papers of interest

i. Clinically Relevant Cognitive Impairment in Middle-Aged Adults With Childhood-Onset Type 1 Diabetes.

“Modest cognitive dysfunction is consistently reported in children and young adults with type 1 diabetes (T1D) (1). Mental efficiency, psychomotor speed, executive functioning, and intelligence quotient appear to be most affected (2); studies report effect sizes between 0.2 and 0.5 (small to modest) in children and adolescents (3) and between 0.4 and 0.8 (modest to large) in adults (2). Whether effect sizes continue to increase as those with T1D age, however, remains unknown.

A key issue not yet addressed is whether aging individuals with T1D have an increased risk of manifesting “clinically relevant cognitive impairment,” defined by comparing individual cognitive test scores to demographically appropriate normative means, as opposed to the more commonly investigated “cognitive dysfunction,” or between-group differences in cognitive test scores. Unlike the extensive literature examining cognitive impairment in type 2 diabetes, we know of only one prior study examining cognitive impairment in T1D (4). This early study reported a higher rate of clinically relevant cognitive impairment among children (10–18 years of age) diagnosed before compared with after age 6 years (24% vs. 6%, respectively) or a non-T1D cohort (6%).”

“This study tests the hypothesis that childhood-onset T1D is associated with an increased risk of developing clinically relevant cognitive impairment detectable by middle age. We compared cognitive test results between adults with and without T1D and used demographically appropriate published norms (10–12) to determine whether participants met criteria for impairment for each test; aging and dementia studies have selected a score ≥1.5 SD worse than the norm on that test, corresponding to performance at or below the seventh percentile (13).”
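
A quick sanity check on that cutoff: under a normal distribution, a score 1.5 SD below the normative mean does indeed sit at roughly the seventh percentile.

```python
# P(Z <= -1.5) under the standard normal: about 0.067, i.e. the ~7th percentile.
from scipy.stats import norm

print(f"P(Z <= -1.5) = {norm.cdf(-1.5):.4f}")  # 0.0668
```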

“During 2010–2013, 97 adults diagnosed with T1D and aged <18 years (age and duration 49 ± 7 and 41 ± 6 years, respectively; 51% female) and 138 similarly aged adults without T1D (age 49 ± 7 years; 55% female) completed extensive neuropsychological testing. Biomedical data on participants with T1D were collected periodically since 1986–1988.  […] The prevalence of clinically relevant cognitive impairment was five times higher among participants with than without T1D (28% vs. 5%; P < 0.0001), independent of education, age, or blood pressure. Effect sizes were large (Cohen d 0.6–0.9; P < 0.0001) for psychomotor speed and visuoconstruction tasks and were modest (d 0.3–0.6; P < 0.05) for measures of executive function. Among participants with T1D, prevalent cognitive impairment was related to 14-year average A1c >7.5% (58 mmol/mol) (odds ratio [OR] 3.0; P = 0.009), proliferative retinopathy (OR 2.8; P = 0.01), and distal symmetric polyneuropathy (OR 2.6; P = 0.03) measured 5 years earlier; higher BMI (OR 1.1; P = 0.03); and ankle-brachial index ≥1.3 (OR 4.2; P = 0.01) measured 20 years earlier, independent of education.”

“Having T1D was the only factor significantly associated with the between-group difference in clinically relevant cognitive impairment in our sample. Traditional risk factors for age-related cognitive impairment, in particular older age and high blood pressure (24), were not related to the between-group difference we observed. […] Similar to previous studies of younger adults with T1D (14,26), we found no relationship between the number of severe hypoglycemic episodes and cognitive impairment. Rather, we found that chronic hyperglycemia, via its associated vascular and metabolic changes, may have triggered structural changes in the brain that disrupt normal cognitive function.”

Just to be absolutely clear about these results: The type 1 diabetics they recruited in this study were on average not yet fifty years old, yet more than one in four of them were cognitively impaired to a clinically relevant degree. This is a huge effect. As they note later in the paper:

“Unlike previous reports of mild/modest cognitive dysfunction in young adults with T1D (1,2), we detected clinically relevant cognitive impairment in 28% of our middle-aged participants with T1D. This prevalence rate in our T1D cohort is comparable to the prevalence of mild cognitive impairment typically reported among community-dwelling adults aged 85 years and older (29%) (20).”

The type 1 diabetics included in the study had had diabetes for roughly a decade more than I have. And the number of cognitively impaired individuals in that sample corresponds roughly to what you find when you test random 85+ year-olds. Having type 1 diabetes is not good for your brain.

ii. Comment on Nunley et al. Clinically Relevant Cognitive Impairment in Middle-Aged Adults With Childhood-Onset Type 1 Diabetes.

This one is a short comment to the above paper, below I’ve quoted ‘the meat’ of the comment:

“While the […] study provides us with important insights regarding cognitive impairment in adults with type 1 diabetes, we regret that depression has not been taken into account. A systematic review and meta-analysis published in 2014 identified significant objective cognitive impairment in adults and adolescents with depression regarding executive functioning, memory, and attention relative to control subjects (2). Moreover, depression is two times more common in adults with diabetes compared with those without this condition, regardless of type of diabetes (3). There is even evidence that the co-occurrence of diabetes and depression leads to additional health risks such as increased mortality and dementia (3,4); this might well apply to cognitive impairment as well. Furthermore, in people with diabetes, the presence of depression has been associated with the development of diabetes complications, such as retinopathy, and higher HbA1c values (3). These are exactly the diabetes-specific correlates that Nunley et al. (1) found.”

“We believe it is a missed opportunity that Nunley et al. (1) mainly focused on biological variables, such as hyperglycemia and microvascular disease, and did not take into account an emotional disorder widely represented among people with diabetes and closely linked to cognitive impairment. Even though severe or chronic cases of depression are likely to have been excluded in the group without type 1 diabetes based on exclusion criteria (1), data on the presence of depression (either measured through a diagnostic interview or by using a validated screening questionnaire) could have helped to interpret the present findings. […] Determining the role of depression in the relationship between cognitive impairment and type 1 diabetes is of significant importance. Treatment of depression might improve cognitive impairment both directly by alleviating cognitive depression symptoms and indirectly by improving treatment nonadherence and glycemic control, consequently lowering the risk of developing complications.”

iii. Prevalence of Diabetes and Diabetic Nephropathy in a Large U.S. Commercially Insured Pediatric Population, 2002–2013.

“[W]e identified 96,171 pediatric patients with diabetes and 3,161 pediatric patients with diabetic nephropathy during 2002–2013. We estimated prevalence of pediatric diabetes overall, by diabetes type, age, and sex, and prevalence of pediatric diabetic nephropathy overall, by age, sex, and diabetes type.”

“Although type 1 diabetes accounts for a majority of childhood and adolescent diabetes, type 2 diabetes is becoming more common with the increasing rate of childhood obesity and it is estimated that up to 45% of all new patients with diabetes in this age-group have type 2 diabetes (1,2). With the rising prevalence of diabetes in children, a rise in diabetes-related complications, such as nephropathy, is anticipated. Moreover, data suggest that the development of clinical macrovascular complications, neuropathy, and nephropathy may be especially rapid among patients with young-onset type 2 diabetes (age of onset <40 years) (3–6). However, the natural history of young patients with type 2 diabetes and resulting complications has not been well studied.”

I’m always interested in the identification mechanisms applied in papers like this one, and I’m a little confused about the high number of patients without prescriptions (almost one-third of patients); I sort of assume these patients do take (/are given) prescription drugs but get them through channels not visible to the researchers (perhaps the prescriptions for the antidiabetic drugs are issued to the parents, and the researchers don’t have access to those data? Something like this…), but this is a bit unclear. The mechanism they employ in the paper is not perfect (no mechanism is), but it probably works:

“Patients who had one or more prescription(s) for insulin and no prescriptions for another antidiabetes medication were classified as having type 1 diabetes, while those who filled prescriptions for noninsulin antidiabetes medications were considered to have type 2 diabetes.”
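
In code the rule is trivial. The sketch below is only my paraphrase of the stated mechanism (the record format is hypothetical), but it also makes explicit what happens to the patients with no prescriptions on file:

```python
# Classification rule from the paper, paraphrased: insulin-only histories are
# coded as type 1, any noninsulin antidiabetes prescription as type 2.
def classify_diabetes_type(prescriptions):
    """prescriptions: iterable of drug-class strings for one patient."""
    drugs = set(prescriptions)
    if drugs - {"insulin"}:          # any noninsulin antidiabetes medication
        return "type 2"
    if "insulin" in drugs:
        return "type 1"
    return "unclassified"            # the puzzling no-prescription group

print(classify_diabetes_type(["insulin"]))               # type 1
print(classify_diabetes_type(["insulin", "metformin"]))  # type 2
print(classify_diabetes_type([]))                        # unclassified
```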

When covering limitations of the paper, they observe incidentally in this context that:

“Klingensmith et al. (31) recently reported that in the initial month after diagnosis of type 2 diabetes around 30% of patients were treated with insulin only. Thus, we may have misclassified a small proportion of type 2 cases as type 1 diabetes or vice versa. Despite this, we found that 9% of patients had onset of type 2 diabetes at age <10 years, consistent with the findings of Klingensmith et al. (8%), but higher than reported by the SEARCH for Diabetes in Youth study (<3%) (31,32).”

Some more observations from the paper:

“There were 149,223 patients aged <18 years at first diagnosis of diabetes in the CCE database from 2002 through 2013. […] Type 1 diabetes accounted for a majority of the pediatric patients with diabetes (79%). Among these, 53% were male and 53% were aged 12 to <18 years at onset, while among patients with type 2 diabetes, 60% were female and 79% were aged 12 to <18 years at onset.”

“The overall annual prevalence of all diabetes increased from 1.86 to 2.82 per 1,000 during years 2002–2013; it increased on average by 9.5% per year from 2002 to 2006 and slowly increased by 0.6% after that […] The prevalence of type 1 diabetes increased from 1.48 to 2.32 per 1,000 during the study period (average increase of 8.5% per year from 2002 to 2006 and 1.4% after that; both P values <0.05). The prevalence of type 2 diabetes increased from 0.38 to 0.67 per 1,000 during 2002 through 2006 (average increase of 13.3% per year; P < 0.05) and then dropped from 0.56 to 0.49 per 1,000 during 2007 through 2013 (average decrease of 2.7% per year; P < 0.05). […] Prevalence of any diabetes increased by age, with the highest prevalence in patients aged 12 to <18 years (ranging from 3.47 to 5.71 per 1,000 from 2002 through 2013). […] The annual prevalence of diabetes increased over the study period mainly because of increases in type 1 diabetes.”

“Dabelea et al. (8) reported, based on data from the SEARCH for Diabetes in Youth study, that the annual prevalence of type 1 diabetes increased from 1.48 to 1.93 per 1,000 and from 0.34 to 0.46 per 1,000 for type 2 diabetes from 2001 to 2009 in U.S. youth. In our study, the annual prevalence of type 1 diabetes was 1.48 per 1,000 in 2002 and 2.10 per 1,000 in 2009, which is close to their reported prevalence.”

“We identified 3,161 diabetic nephropathy cases. Among these, 1,509 cases (47.7%) were of specific diabetic nephropathy and 2,253 (71.3%) were classified as probable cases. […] The annual prevalence of diabetic nephropathy in pediatric patients with diabetes increased from 1.16 to 3.44% between 2002 and 2013; it increased by on average 25.7% per year from 2002 to 2005 and slowly increased by 4.6% after that (both P values <0.05).”

Do note that the relationship between nephropathy prevalence and diabetes prevalence is complicated, and that you cannot easily explain an increase in the prevalence of nephropathy over time simply by referring to an increased prevalence of diabetes during the same time period. This would in fact be a very wrong thing to do, in part but not only on account of the data structure employed in this study. One problem which is probably easy to understand is that if more children got diabetes but the same proportion of those new diabetics got nephropathy, the diabetes prevalence would go up but the diabetic nephropathy prevalence would remain fixed; when you calculate the diabetic nephropathy prevalence you implicitly condition on diabetes status.

But this just scratches the surface of the issues you encounter when you try to link these variables. There’s an age pattern to diabetes risk, with risk (incidence) increasing with age (up to a point, after which it falls – in most samples I’ve seen in the past, peak incidence in pediatric populations is well below the age of 18). Diabetes prevalence, however, increases monotonically with age as long as the age-specific death rate of diabetics is lower than the age-specific incidence, because diabetes is chronic. And on top of that you have nephropathy-related variables, which display diabetes-related duration-dependence: although nephropathy risk also increases with age when you look at that variable in isolation, the age–risk relationship is confounded by diabetes duration – a type 1 diabetic at the age of 12 who’s had diabetes for 10 years has a higher risk of nephropathy than a 16-year-old who developed diabetes the year before.

When a newly diagnosed pediatric patient is included in the diabetes sample here, this will actually decrease the nephropathy prevalence in the short run, but not in the long run, assuming no changes in diabetes treatment outcomes over time. This is because the probability that that individual has diabetes-related kidney problems as a newly diagnosed child is zero, so he or she will only contribute to the denominator during the first years of illness (the situation in the middle-aged type 2 context is different; there you do sometimes have newly diagnosed patients who have already developed complications). This is one reason why it would be quite wrong to say that increased diabetes prevalence in this sample is the reason why diabetic nephropathy is increasing as well. Unless the time period you look at is very long (e.g. a setting where you follow all individuals with a diagnosis until the age of 18), increasing prevalence of one condition may well be expected to lower the estimated risk of associated conditions, if those associated conditions display duration-dependence (which all major diabetes complications do). A second factor supporting a default expectation that increasing diabetes incidence should lower the rate of diabetes-related complications is of course the fact that treatment options have tended to improve over time; especially if you take a long view (look back 30–40 years), expanded treatment options and improved medical technology have led to better metabolic control and better outcomes.
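
To make the denominator point concrete, here is a toy cohort model (all rates invented; the only structural assumption doing any work is that nephropathy risk grows with diabetes duration). A jump in incidence mechanically lowers the measured nephropathy prevalence among diabetics in the short run, even though per-duration complication risk is identical in the two scenarios:

```python
# Toy model: nephropathy risk depends only on time since diagnosis, so an
# influx of recently diagnosed (short-duration) patients dilutes the
# complication prevalence without any change in underlying risk.
def neph_prevalence(incidence_by_year, hazard_per_year=0.02):
    """Share of diabetics with nephropathy at the end of the horizon."""
    cases = with_neph = 0.0
    horizon = len(incidence_by_year)
    for onset_year, n_new in enumerate(incidence_by_year):
        duration = horizon - onset_year               # years since diagnosis
        p_neph = 1 - (1 - hazard_per_year) ** duration
        cases += n_new
        with_neph += n_new * p_neph
    return with_neph / cases

flat = [1000] * 15
rising = [1000] * 10 + [2000] * 5    # incidence doubles in the last 5 years
print(f"Flat incidence:   {neph_prevalence(flat):.3%}")
print(f"Rising incidence: {neph_prevalence(rising):.3%}")  # lower, by dilution alone
```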

That both variables grew over time might be taken to indicate both that more children got diabetes and that a larger proportion of this increased number of children with diabetes developed kidney problems, but this stuff is a lot more complicated than it might look, and it’s in particular important to keep in mind that, say, the 2005 sample and the 2010 sample do not include the same individuals, although there’ll of course be some overlap; in age-stratified samples like this you always have some level of implicit continuous replacement, with newly diagnosed patients entering and replacing the 18-year-olds who leave the sample. As long as prevalence is constant over time, associated outcome variables may be reasonably easy to interpret, but when you have dynamic samples as well as increasing prevalence over time, it gets difficult to say much with any degree of certainty unless you crunch the numbers in a lot of detail (and it might be difficult even if you do that). A factor I didn’t mention above, but which is of course also relevant, is that you need to be careful about how to interpret prevalence rates when you look at complications with high mortality rates (and late-stage diabetic nephropathy is indeed a complication with high mortality); in such a situation, improvements in treatment outcomes may have large effects on prevalence rates but no effect on incidence. Increased prevalence is not always bad news; sometimes it is good news indeed. Gleevec substantially increased the prevalence of CML.

In terms of the prevalence–outcomes (/complication risk) connection, there are also in my opinion reasons to assume that there may be multiple causal pathways between prevalence and outcomes. For example, a very low prevalence of a condition in a given area may mean that fewer specialists are educated to take care of these patients than would be the case in an area with a higher prevalence, and this may translate into a more poorly developed care infrastructure. Greatly increasing prevalence may on the other hand lead to a lower level of care for all patients with the illness, not just the newly diagnosed ones, due to binding budget constraints and care rationing. And why might you have changes in prevalence? Might they not sometimes be related to changes in diagnostic practices, rather than changes in the True* prevalence? If that’s the case, you might not be comparing apples to apples when you’re comparing the evolving complication rates. There are in my opinion many reasons to believe that the relationship between chronic conditions and the complication rates of these conditions is far from simple to model.

All this said, kidney problems in children with diabetes are still rare compared to the numbers you see when you look at adult samples with longer diabetes duration. It’s also worth distinguishing between microalbuminuria and overt nephropathy; children rarely proceed to develop diabetes-related kidney failure, although poor metabolic control may mean that they do develop this complication later, in early adulthood. As they note in the paper:

“It has been reported that overt diabetic nephropathy and kidney failure caused by either type 1 or type 2 diabetes are uncommon during childhood or adolescence (24). In this study, the annual prevalence of diabetic nephropathy for all cases ranged from 1.16 to 3.44% in pediatric patients with diabetes and was extremely low in the whole pediatric population (range 2.15 to 9.70 per 100,000), confirming that diabetic nephropathy is a very uncommon condition in youth aged <18 years. We observed that the prevalence of diabetic nephropathy increased in both specific and unspecific cases before 2006, with a leveling off of the specific nephropathy cases after 2005, while the unspecific cases continued to increase.”

iv. Adherence to Oral Glucose-Lowering Therapies and Associations With 1-Year HbA1c: A Retrospective Cohort Analysis in a Large Primary Care Database.

“Between a third and a half of medicines prescribed for type 2 diabetes (T2DM), a condition in which multiple medications are used to control cardiovascular risk factors and blood glucose (1,2), are not taken as prescribed (3–6). However, estimates vary widely depending on the population being studied and the way in which adherence to recommended treatment is defined.”

“A number of previous studies have used retrospective databases of electronic health records to examine factors that might predict adherence. A recent large cohort database examined overall adherence to oral therapy for T2DM, taking into account changes of therapy. It concluded that overall adherence was 69%, with individuals newly started on treatment being significantly less likely to adhere (19).”

“The impact of continuing to take glucose-lowering medicines intermittently, but not as recommended, is unknown. Medication possession (expressed as a ratio of actual possession to expected possession), derived from prescribing records, has been identified as a valid adherence measure for people with diabetes (7). Previous studies have been limited to small populations in managed-care systems in the U.S. and focused on metformin and sulfonylurea oral glucose-lowering treatments (8,9). Further studies need to be carried out in larger groups of people that are more representative of the general population.

The Clinical Practice Research Database (CPRD) is a long established repository of routine clinical data from more than 13 million patients registered with primary care services in England. […] The Genetics of Diabetes and Audit Research Tayside Study (GoDARTS) database is derived from integrated health records in Scotland with primary care, pharmacy, and hospital data on 9,400 patients with diabetes. […] We conducted a retrospective cohort study using [these databases] to examine the prevalence of nonadherence to treatment for type 2 diabetes and investigate its potential impact on HbA1c reduction stratified by type of glucose-lowering medication.”

“In CPRD and GoDARTS, 13% and 15% of patients, respectively, were nonadherent. Proportions of nonadherent patients varied by the oral glucose-lowering treatment prescribed (range 8.6% [thiazolidinedione] to 18.8% [metformin]). Nonadherent, compared with adherent, patients had a smaller HbA1c reduction (0.4% [4.4 mmol/mol] and 0.46% [5.0 mmol/mol] for CPRD and GoDARTs, respectively). Difference in HbA1c response for adherent compared with nonadherent patients varied by drug (range 0.38% [4.1 mmol/mol] to 0.75% [8.2 mmol/mol] lower in adherent group). Decreasing levels of adherence were consistently associated with a smaller reduction in HbA1c.”

“These findings show an association between adherence to oral glucose-lowering treatment, measured by the proportion of medication obtained on prescription over 1 year, and the corresponding decrement in HbA1c, in a population of patients newly starting treatment and continuing to collect prescriptions. The association is consistent across all commonly used oral glucose-lowering therapies, and the findings are consistent between the two data sets examined, CPRD and GoDARTS. Nonadherent patients, taking on average <80% of the intended medication, had about half the expected reduction in HbA1c. […] Reduced medication adherence for commonly used glucose-lowering therapies among patients persisting with treatment is associated with smaller HbA1c reductions compared with those taking treatment as recommended. Differences observed in HbA1c responses to glucose-lowering treatments may be explained in part by their intermittent use.”
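
For reference, the medication possession ratio that the adherence measure is built on is simple to compute from dispensing records. A bare-bones sketch (the record format is hypothetical, and real implementations handle overlapping fills and censoring more carefully), with the conventional 80% cutoff:

```python
# Medication possession ratio: days of medication supplied divided by days of
# expected coverage, capped at 1.0; MPR < 0.8 is the usual nonadherence flag.
from datetime import date

def mpr(fills, period_start, period_end):
    """fills: list of (fill_date, days_supply) tuples within the period."""
    expected_days = (period_end - period_start).days
    supplied_days = sum(days for _, days in fills)
    return min(supplied_days / expected_days, 1.0)

fills = [(date(2016, 1, 1), 28), (date(2016, 2, 5), 28),
         (date(2016, 4, 20), 28), (date(2016, 6, 1), 28)]
ratio = mpr(fills, date(2016, 1, 1), date(2016, 7, 1))
print(f"MPR = {ratio:.2f} -> {'adherent' if ratio >= 0.8 else 'nonadherent'}")
```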

“Low medication adherence is related to increased mortality (20). The mean difference in HbA1c between patients with MPR <80% and ≥80% is between 0.37% and 0.55% (4 mmol/mol and 6 mmol/mol), equivalent to up to a 10% reduction in death or an 18% reduction in diabetes complications (21).”
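
Incidentally, the bracketed unit pairs in these quotes follow the standard NGSP-to-IFCC conversion for HbA1c. For differences the intercept drops out, so the quoted percentage-point gaps map onto mmol/mol through the slope alone:

```python
# NGSP (%) to IFCC (mmol/mol) master equation: mmol/mol = (% - 2.15) * 10.929.
def hba1c_pct_to_mmol_per_mol(pct):
    return (pct - 2.15) * 10.929

# For a *difference* between two HbA1c values only the slope matters:
for diff in (0.37, 0.46, 0.55):
    print(f"difference of {diff} %-points ~ {diff * 10.929:.1f} mmol/mol")
```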

v. Health Care Transition in Young Adults With Type 1 Diabetes: Perspectives of Adult Endocrinologists in the U.S.

“Empiric data are limited on best practices in transition care, especially in the U.S. (10,13–16). Prior research, largely from the patient perspective, has highlighted challenges in the transition process, including gaps in care (13,17–19); suboptimal pediatric transition preparation (13,20); increased post-transition hospitalizations (21); and patient dissatisfaction with the transition experience (13,17–19). […] Young adults with type 1 diabetes transitioning from pediatric to adult care are at risk for adverse outcomes. Our objective was to describe experiences, resources, and barriers reported by a national sample of adult endocrinologists receiving and caring for young adults with type 1 diabetes.”

“We received responses from 536 of 4,214 endocrinologists (response rate 13%); 418 surveys met the eligibility criteria. Respondents (57% male, 79% Caucasian) represented 47 states; 64% had been practicing >10 years and 42% worked at an academic center. Only 36% of respondents reported often/always reviewing pediatric records and 11% reported receiving summaries for transitioning young adults with type 1 diabetes, although >70% felt that these activities were important for patient care.”

“A number of studies document deficiencies in provider hand-offs across other chronic conditions and point to the broader relevance of our findings. For example, in two studies of inflammatory bowel disease, adult gastroenterologists reported inadequacies in young adult transition preparation (31) and infrequent receipt of medical histories from pediatric providers (32). In a study of adult specialists caring for young adults with a variety of chronic diseases (33), more than half reported that they had no contact with the pediatric specialists.

Importantly, more than half of the endocrinologists in our study reported a need for increased access to mental health referrals for young adult patients with type 1 diabetes, particularly in nonacademic settings. Report of barriers to care was highest for patient scenarios involving mental health issues, and endocrinologists without easy access to mental health referrals were significantly more likely to report barriers to diabetes management for young adults with psychiatric comorbidities such as depression, substance abuse, and eating disorders.”

“Prior research (34,35) has uncovered the lack of mental health resources in diabetes care. In the large cross-national Diabetes Attitudes, Wishes and Needs (DAWN) study (36) […] diabetes providers often reported not having the resources to manage mental health problems; half of specialist diabetes physicians felt unable to provide psychiatric support for patients and one-third did not have ready access to outside expertise in emotional or psychiatric matters. Our results, which resonate with the DAWN findings, are particularly concerning in light of the vulnerability of young adults with type 1 diabetes for adverse medical and mental health outcomes (4,34,37,38). […] In a recent report from the Mental Health Issues of Diabetes conference (35), which focused on type 1 diabetes, a major observation included the lack of trained mental health professionals, both in academic centers and the community, who are knowledgeable about the mental health issues germane to diabetes.”

August 3, 2017 Posted by | Diabetes, Epidemiology, Medicine, Nephrology, Neurology, Pharmacology, Psychiatry, Psychology, Statistics, Studies | Leave a comment

Beyond Significance Testing (III)

There are many ways to misinterpret significance tests, and this book spends quite a bit of time and effort on these kinds of issues. I decided to include in this post quite a few quotes from chapter 4 of the book, which deals with these topics in some detail. I also included some notes on effect sizes.

“[P] < .05 means that the likelihood of the data or results even more extreme given random sampling under the null hypothesis is < .05, assuming that all distributional requirements of the test statistic are satisfied and there are no other sources of error variance. […] the odds-against-chance fallacy […] [is] the false belief that p indicates the probability that a result happened by sampling error; thus, p < .05 says that there is less than a 5% likelihood that a particular finding is due to chance. There is a related misconception I call the filter myth, which says that p values sort results into two categories, those that are a result of “chance” (H0 not rejected) and others that are due to “real” effects (H0 rejected). These beliefs are wrong […] When p is calculated, it is already assumed that H0 is true, so the probability that sampling error is the only explanation is already taken to be 1.00. It is thus illogical to view p as measuring the likelihood of sampling error. […] There is no such thing as a statistical technique that determines the probability that various causal factors, including sampling error, acted on a particular result.

Most psychology students and professors may endorse the local Type I error fallacy [which is] the mistaken belief that p < .05 given α = .05 means that the likelihood that the decision just taken to reject H0 is a type I error is less than 5%. […] p values from statistical tests are conditional probabilities of data, so they do not apply to any specific decision to reject H0. This is because any particular decision to do so is either right or wrong, so no probability is associated with it (other than 0 or 1.0). Only with sufficient replication could one determine whether a decision to reject H0 in a particular study was correct. […] the valid research hypothesis fallacy […] refers to the false belief that the probability that H1 is true is > .95, given p < .05. The complement of p is a probability, but 1 – p is just the probability of getting a result even less extreme under H0 than the one actually found. This fallacy is endorsed by most psychology students and professors”.
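
The odds-against-chance fallacy is perhaps easiest to dispel with a bit of arithmetic. Among rejected null hypotheses, the share coming from true nulls depends on the base rate of real effects and on power, not on α alone; with the invented but not implausible numbers below, almost half of all ‘significant’ findings come from true nulls even though α = .05:

```python
# Share of rejections that are false positives, for illustrative inputs.
base_rate = 0.10   # 10% of tested hypotheses are real effects (assumption)
alpha = 0.05       # type I error rate
power = 0.50       # probability of detecting a real effect (assumption)

false_pos = (1 - base_rate) * alpha   # true nulls that get rejected
true_pos = base_rate * power          # real effects that get detected
print(f"P(H0 true | H0 rejected) = {false_pos / (false_pos + true_pos):.2f}")
# ~0.47 - nowhere near the 0.05 the fallacy would suggest.
```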

“[S]everal different false conclusions may be reached after deciding to reject or fail to reject H0. […] the magnitude fallacy is the false belief that low p values indicate large effects. […] p values are confounded measures of effect size and sample size […]. Thus, effects of trivial magnitude need only a large enough sample to be statistically significant. […] the zero fallacy […] is the mistaken belief that the failure to reject a nil hypothesis means that the population effect size is zero. Maybe it is, but you cannot tell based on a result in one sample, especially if power is low. […] The equivalence fallacy occurs when the failure to reject H0: µ1 = µ2 is interpreted as saying that the populations are equivalent. This is wrong because even if µ1 = µ2, distributions can differ in other ways, such as variability or distribution shape.”
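
The magnitude fallacy is just as easy to demonstrate numerically: hold a trivial standardized mean difference fixed and let the sample size grow, and the p-value moves from nowhere near significance to vanishingly small:

```python
# Same (trivial) effect size, three sample sizes: p confounds d and n.
import numpy as np
from scipy import stats

d = 0.05                                  # a trivial standardized difference
for n in (100, 1_000, 100_000):           # per-group sample sizes
    t = d * np.sqrt(n / 2)                # two-sample t for equal n, unit SDs
    p = 2 * stats.t.sf(abs(t), df=2 * n - 2)
    print(f"n = {n:>7,} per group: t = {t:6.2f}, p = {p:.4f}")
```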

“[T]he reification fallacy is the faulty belief that failure to replicate a result is the failure to make the same decision about H0 across studies […]. In this view, a result is not considered replicated if H0 is rejected in the first study but not in the second study. This sophism ignores sample size, effect size, and power across different studies. […] The sanctification fallacy refers to dichotomous thinking about continuous p values. […] Differences between results that are “significant” versus “not significant” by close margins, such as p = .03 versus p = .07 when α = .05, are themselves often not statistically significant. That is, relatively large changes in p can correspond to small, nonsignificant changes in the underlying variable (Gelman & Stern, 2006). […] Classical parametric statistical tests are not robust against outliers or violations of distributional assumptions, especially in small, unrepresentative samples. But many researchers believe just the opposite, which is the robustness fallacy. […] most researchers do not provide evidence about whether distributional or other assumptions are met”.
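
Gelman and Stern’s point about the sanctification fallacy can be verified directly: take two estimates with equal standard errors, one just below and one just above the α = .05 threshold, and test the difference between them:

```python
# p = .03 vs. p = .07 - but the difference between the two estimates is far
# from significant. Equal standard errors of 1 are assumed for both estimates.
import numpy as np
from scipy.stats import norm

z1, z2 = 2.17, 1.81   # z-scores corresponding to p = .03 and p = .07
print(f"p1 = {2 * norm.sf(z1):.3f}, p2 = {2 * norm.sf(z2):.3f}")

z_diff = (z1 - z2) / np.sqrt(2)   # SE of the difference of two unit-SE estimates
print(f"p for the difference = {2 * norm.sf(z_diff):.2f}")  # ~0.80
```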

“Many [of the above] fallacies involve wishful thinking about things that researchers really want to know. These include the probability that H0 or H1 is true, the likelihood of replication, and the chance that a particular decision to reject H0 is wrong. Alas, statistical tests tell us only the conditional probability of the data. […] But there is [however] a method that can tell us what we want to know. It is not a statistical technique; rather, it is good, old-fashioned replication, which is also the best way to deal with the problem of sampling error. […] Statistical significance provides even in the best case nothing more than low-level support for the existence of an effect, relation, or difference. That best case occurs when researchers estimate a priori power, specify the correct construct definitions and operationalizations, work with random or at least representative samples, analyze highly reliable scores in distributions that respect test assumptions, control other major sources of imprecision besides sampling error, and test plausible null hypotheses. In this idyllic scenario, p values from statistical tests may be reasonably accurate and potentially meaningful, if they are not misinterpreted. […] The capability of significance tests to address the dichotomous question of whether effects, relations, or differences are greater than expected levels of sampling error may be useful in some new research areas. Due to the many limitations of statistical tests, this period of usefulness should be brief. Given evidence that an effect exists, the next steps should involve estimation of its magnitude and evaluation of its substantive significance, both of which are beyond what significance testing can tell us. […] It should be a hallmark of a maturing research area that significance testing is not the primary inference method.”

“[An] effect size [is] a quantitative reflection of the magnitude of some phenomenon used for the sake of addressing a specific research question. In this sense, an effect size is a statistic (in samples) or parameter (in populations) with a purpose, that of quantifying a phenomenon of interest. More specific definitions may depend on study design. […] cause size refers to the independent variable and specifically to the amount of change in it that produces a given effect on the dependent variable. A related idea is that of causal efficacy, or the ratio of effect size to the size of its cause. The greater the causal efficacy, the more that a given change on an independent variable results in proportionally bigger changes on the dependent variable. The idea of cause size is most relevant when the factor is experimental and its levels are quantitative. […] An effect size measure […] is a named expression that maps data, statistics, or parameters onto a quantity that represents the magnitude of the phenomenon of interest. This expression connects dimensions or generalized units that are abstractions of variables of interest with a specific operationalization of those units.”

“A good effect size measure has the [following properties:] […] 1. Its scale (metric) should be appropriate for the research question. […] 2. It should be independent of sample size. […] 3. As a point estimate, an effect size should have good statistical properties; that is, it should be unbiased, consistent […], and efficient […]. 4. The effect size [should be] reported with a confidence interval. […] Not all effect size measures […] have all the properties just listed. But it is possible to report multiple effect sizes that address the same question in order to improve the communication of the results.” 
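
As a concrete illustration of points 3 and 4 on that list, here is a sketch of a standardized effect size – a pooled-SD mean difference, i.e. Cohen’s d – reported with an approximate 95% confidence interval. The data are simulated and the standard error is the usual large-sample approximation:

```python
# Cohen's d with an approximate 95% CI (Hedges & Olkin-style SE formula).
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, 80)   # control group (simulated)
b = rng.normal(0.5, 1.0, 80)   # treatment group (simulated)

n1, n2 = len(a), len(b)
sp = np.sqrt(((n1 - 1) * a.var(ddof=1) + (n2 - 1) * b.var(ddof=1)) / (n1 + n2 - 2))
d = (b.mean() - a.mean()) / sp

se_d = np.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
lo, hi = d - 1.96 * se_d, d + 1.96 * se_d
print(f"d = {d:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```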

“Examples of outcomes with meaningful metrics include salaries in dollars and post-treatment survival time in years. Means or contrasts for variables with meaningful units are unstandardized effect sizes that can be directly interpreted. […] In medical research, physical measurements with meaningful metrics are often available. […] But in psychological research there are typically no “natural” units for abstract, nonphysical constructs such as intelligence, scholastic achievement, or self-concept. […] Therefore, metrics in psychological research are often arbitrary instead of meaningful. An example is the total score for a set of true-false items. Because responses can be coded with any two different numbers, the total is arbitrary. Standard scores such as percentiles and normal deviates are arbitrary, too […] Standardized effect sizes can be computed for results expressed in arbitrary metrics. Such effect sizes can also be directly compared across studies where outcomes have different scales. This is because standardized effect sizes are based on units that have a common meaning regardless of the original metric.”

“1. It is better to report unstandardized effect sizes for outcomes with meaningful metrics. This is because the original scale is lost when results are standardized. 2. Unstandardized effect sizes are best for comparing results across different samples measured on the same outcomes. […] 3. Standardized effect sizes are better for comparing conceptually similar results based on different units of measure. […] 4. Standardized effect sizes are affected by the corresponding unstandardized effect sizes plus characteristics of the study, including its design […], whether factors are fixed or random, the extent of error variance, and sample base rates. This means that standardized effect sizes are less directly comparable over studies that differ in their designs or samples. […] 5. There is no such thing as T-shirt effect sizes (Lenth, 2006–2009) that classify standardized effect sizes as “small,” “medium,” or “large” and apply over all research areas. This is because what is considered a large effect in one area may be seen as small or trivial in another. […] 6. There is usually no way to directly translate standardized effect sizes into implications for substantive significance. […] It is standardized effect sizes from sets of related studies that are analyzed in most meta-analyses.”

July 16, 2017 Posted by | Books, Psychology, Statistics | Leave a comment

The Personality Puzzle (IV)

Below I have added a few quotes from the last 100 pages of the book. This will be my last post about the book.

“Carol Dweck and her colleagues claim that two […] kinds of goals are […] important […]. One kind she calls judgment goals. Judgment, in this context, refers to seeking to judge or validate an attribute in oneself. For example, you might have the goal of convincing yourself that you are smart, beautiful, or popular. The other kind she calls development goals. A development goal is the desire to actually improve oneself, to become smarter, more beautiful, or more popular. […] From the perspective of Dweck’s theory, these two kinds of goals are important in many areas of life because they produce different reactions to failure, and everybody fails sometimes. A person with a development goal will respond to failure with what Dweck calls a mastery-oriented pattern, in which she tries even harder the next time. […] In contrast, a person with a judgment goal responds to failure with what Dweck calls the helpless pattern: Rather than try harder, this individual simply concludes, “I can’t do it,” and gives up. Of course, that only guarantees more failure in the future. […] Dweck believes [the goals] originate in different kinds of implicit theories about the nature of the world […] Some people hold what Dweck calls entity theories, and believe that personal qualities such as intelligence and ability are unchangeable, leading them to respond helplessly to any indication that they do not have what it takes. Other people hold incremental theories, believing that intelligence and ability can change with time and experience. Their goals, therefore, involve not only proving their competence but increasing it.”

(I should probably add here that any sort of empirical validation of those theories and their consequences is, aside from a brief discussion of the results of a few (likely weak, low-powered) studies, completely absent from the book; even so, this kind of stuff might be worth keeping in mind, which is why I included this quote in my coverage – US).

“A large amount of research suggests that low self-esteem […] is correlated with outcomes such as dissatisfaction with life, hopelessness, and depression […] as well as loneliness […] Declines in self-esteem also appear to cause outcomes including depression, lower satisfaction with relationships, and lower satisfaction with one’s career […] Your self-esteem tends to suffer when you have failed in the eyes of your social group […] This drop in self-esteem may be a warning about possible rejection or even social ostracism — which, for our distant ancestors, could literally be fatal — and motivate you to restore your reputation. High self-esteem, by contrast, may indicate success and acceptance. Attempts to bolster self-esteem can backfire. […] People who self-enhance — who think they are better than the other people who know them think they are — can run into problems in relations with others, mental health, and adjustment […] Narcissism is associated with high self-esteem that is brittle and unstable because it is unrealistic […], and unstable self-esteem may be worse than low self-esteem […] The bottom line is that promoting psychological health requires something more complex than simply trying to make everybody feel better about themselves […]. The best way to raise self-esteem is through accomplishments that increase it legitimately […]. The most important aspect of your opinion of yourself is not whether it is good or bad, but the degree to which it is accurate.”

“An old theory suggested that if you repeated something over and over in your mind, such rehearsal was sufficient to move the information into long-term memory (LTM), or permanent memory storage. Later research showed that this idea is not quite correct. The best way to get information into LTM, it turns out, is not just to repeat it, but to really think about it (a process called elaboration). The longer and more complex the processing that a piece of information receives, the more likely it is to get transferred into LTM”.

“Concerning mental health, aspects of personality can become so extreme as to cause serious problems. When this happens, psychologists begin to speak of personality disorders […] Personality disorders have five general characteristics. They are (1) unusual and, (2) by definition, tend to cause problems. In addition, most but not quite all personality disorders (3) affect social relations and (4) are stable over time. Finally, (5) in some cases, the person who has a personality disorder may see it not as a disorder at all, but a basic part of who he or she is. […] personality disorders can be ego-syntonic, which means the people who have them do not think anything is wrong. People who suffer from other kinds of mental disorder generally experience their symptoms of confusion, depression, or anxiety as ego-dystonic afflictions of which they would like to be cured. For a surprising number of people with personality disorders, in contrast, their symptoms feel like normal and even valued aspects of who they are. Individuals with the attributes of the antisocial or narcissistic personality disorders, in particular, typically do not think they have a problem.”

[One side-note: It’s important to be aware that not all people who display unusual behavioral patterns that cause them problems suffer from a personality disorder. Other categorization schemes also exist. Autism, for example, is not categorized as a personality disorder, but is instead considered a (neuro)developmental disorder. Funder does not go into this kind of stuff in his book, but I thought it might be worth mentioning here – US]

“Some people are more honest than others, but when deceit and manipulation become core aspects of an individual’s way of dealing with the world, he may be diagnosed with antisocial personality disorder. […] People with this disorder are impulsive, and engage in risky behaviors […] They typically are irritable, aggressive, and irresponsible. The damage they do to others bothers them not one whit; they rationalize […] that life is unfair; the world is full of suckers; and if you don’t take what you want whenever you can, then you are a sucker too. […] A wide variety of negative outcomes may accompany this disorder […] Antisocial personality disorder is sometimes confused with the trait of psychopathy […] but it’s importantly different […] Psychopaths are emotionally cold, they disregard social norms, and they are manipulative and often cunning. Most psychopaths meet the criteria for antisocial personality disorder, but the reverse is not true.”

“From day to day with different people, and over time with the same people, most individuals feel and act pretty consistently. […] Predictability makes it possible to deal with others in a reasonable way, and gives each of us a sense of individual identity. But some people are less consistent than others […] borderline personality disorder […] is characterized by unstable and confused behavior, a poor sense of identity, and patterns of self-harm […] Their chaotic thoughts, emotions, and behaviors make persons suffering from this disorder very difficult for others to “read” […] Borderline personality disorder (BPD) entails so many problems for the affected person that nobody doubts that it is, at the very least, on the “borderline” with severe psychopathology. Its hallmark is emotional instability. […] All of the personality disorders are rather mixed bags of indicators, and BPD may be the most mixed of all. It is difficult to find a coherent, common thread among its characteristics […] Some psychologists […] have suggested that this [personality disorder] category is too diffuse and should be abandoned.”

“[T]he modern research literature on personality disorders has come close to consensus about one conclusion: There is no sharp dividing line between psychopathology and normal variation (L. A. Clark & Watson, 1999a; Furr & Funder, 1998; Hong & Paunonen, 2011; Krueger & Eaton, 2010; Krueger & Tackett, 2003; B. P. O’Connor, 2002; Trull & Durrett, 2005).”

“Accurate self-knowledge has long been considered a hallmark of mental health […] The process for gaining accurate self-knowledge is outlined by the Realistic Accuracy Model […] according to RAM, one can gain accurate knowledge of anyone’s personality through a four-stage process. First, the person must do something relevant to the trait being judged; second, the information must be available to the judge; third, the judge must detect this information; and fourth, the judge must utilize the information correctly. This model was initially developed to explain the accuracy of judgments of other people. In an important sense, though, you are just one of the people you happen to know, and, to some degree, you come to know yourself the same way you find out about anybody else — by observing what you do and trying to draw appropriate conclusions”.

“[P]ersonality is not just something you have; it is also something you do. The unique aspects of what you do comprise the procedural self, and your knowledge of this self typically takes the form of procedural knowledge. […] The procedural self is made up of the behaviors through which you express who you think you are, generally without knowing you are doing so […]. Like riding a bicycle, the working of the procedural self is automatic and not very accessible to conscious awareness.”

July 14, 2017 Posted by | Books, Psychology