Delusion and Self-Deception – Affective and Motivational Influences on Belief Formation (II)

[A brief note before moving on to the book review: When publishing this post I was notified by WordPress that I have now posted 2001 posts on this blog. It would of course have made more sense to add a remark in the post which came before this one, but I didn’t notice and had no idea I was that close to the 2000-mark. I don’t know whether the number is correct; it somehow seems ‘too high’ to me, but I’m not going to count the posts to find out. Even if I tried, I know that I have deleted a lot of posts over the years, so there are far fewer posts than that available in the archives, and I wouldn’t be able to tell from the information available to me now. According to the summary displayed on my dashboard right now, there are 1304 posts in the archives. Anyway, I thought I should mention this here.]

I finished the book.

There were a couple of really nice chapters in the first half, but there was a lot of not-great stuff as well; the average quality and the variance in quality of the material in the second half were fairly similar to those of the first. Some parts of the second half were quite helpful in terms of making better sense of material the poorer chapters in the first part dealt with. I gave the book two stars on goodreads. I must admit that I think the editors might have done a better job; I was very annoyed by the inclusion, in the first part of the book, of multiple articles which either completely neglected to define terms used in the text or described them poorly. A couple of times throughout the book I came across an explanation of a term or distinction which had been used in multiple previous chapters, where earlier on I had been sort of semi-guessing what the authors meant, and suddenly I realized that it was really quite easy to explain; the previous authors simply hadn’t done so (/been able to do so?). You could argue that part of the problem is that some of the contributions were not well written and that you can’t blame the editors for that, but to me it seems problematic to include, reasonably early in a book like this, chapters which assume you know all about X, and then multiple chapters later include a chapter written by someone who (more reasonably) assumes that readers may have no clue what X refers to – it might have been smart to place the latter chapter a bit sooner in the coverage.

The book is published by Psychology Press, but some chapters are written by philosophers rather than psychologists. Although not all of the coverage is purely conceptual, there’s a lot of that sort of material in this book, and the amount of empirical content is not very impressive – at least not if you don’t think much of elaborate descriptions of anecdotes and studies conducted on six people or thereabouts. I recall this having been a big issue for me previously when looking at some related neuroscience (there’s some neuroscience in the book, as also indicated in the previous post), some of which is occasionally mentioned in the work (specifically the research done by Ramachandran & Sacks).

Part of the low rating on my part is surely due to unmet expectations; I’d expected the book to cover in some detail what might be termed ‘common self-deception’ in ordinary people – the type of self-deception normal people engage in all the time. There’s almost none of that in the book, though there are a few comments on the topic here and there. Most chapters focus mainly on people with specific delusions – especially the Capgras delusion, which a lot of chapters deal with – and then discuss the best way to model the disorder; in the context of the various (conceptual) models proposed, they then occasionally consider whether such patients are best thought of as self-deceiving or not, and what we might actually mean by this. Although this is interesting enough, I didn’t pick up the book because I thought that was what was inside it. Unmet expectations were not the only reason why I mentally subtracted a bit from the overall rating; I have a perhaps irrational tendency to become annoyed when people writing books like these don’t seem to know (relevant) stuff I do, and in this case an author made the ‘mistake’ of talking about Othello syndrome (basically: ‘delusional jealousy’) without seeming to be at all familiar with relevant ethological research such as what’s included in texts like this one. The lack of familiarity with this field meant, I thought, that the theoretical notions included in that part of the coverage completely overlooked some really big variables and meaningful approaches to conceptualizing this syndrome. In all fairness it should be noted that another author elsewhere in the book does seem familiar with at least some parts of this research and in fact explicitly refers to it in the text.

Although there’s a lot of what might be termed ‘conceptual coverage’ in the book, it’s not as if there isn’t some empirical material as well; to take an example, the book has a chapter called ‘Emotion, Cognition, and Belief – Findings From Cognitive Neuroscience’. To the extent that the coverage is empirical, it mostly asks questions such as what happens to/in the brains of people with or without specific brain lesions known to be related to specific delusions, and how we can know/have tested this. This is interesting enough; for example, there’s some work presented suggesting that the risk of developing a quite common type of stroke-related delusional belief – in which the patient denies that he’s suffering from paralysis, even if he obviously can’t move his arm or leg – is related to which part of the brain is damaged (see below).

Below I’ve added some observations from a few of the chapters I did not cover in the first post. Although I may sound critical in my comments above, I should note that there was a reason why I finished the book instead of giving up on it; there were quite a few interesting observations along the way, many of which I was for various reasons unable to include in my coverage of the book here.

“Until recently, many delusions were widely regarded as having a motivational psychogenesis. That is, delusions were viewed as being motivated and their formation and maintenance seen as attributable to the psychological benefits they provided to deluded individuals. […] This explicitly motivational formulation, which explains a delusory belief in terms of the psychological benefits it confers, is consistent with a long tradition in psychology, the psychodynamic tradition. […] The key notion in psychoanalytic [/…-dynamic] accounts is that delusions are viewed as having a palliative function; they represent an attempt (however misguided) to relieve pain, tension, and distress. […] Motivational accounts of delusions can be generally distinguished from another major explanatory class — that involving the notion of defect or deficit […] Theories in this second class view delusions as the consequence of fundamental cognitive or perceptual anomalies ranging from complete breakdowns in certain crucial elements of cognitive–perceptual machinery […] to milder dysfunctions involving the distorted operation of particular processes […] There is little doubt that the edifice of psychodynamic thought is replete with theorizing that is at once outrageously presumptive and outlandishly speculative. [I liked this sentence… ] […] For the purposes of this chapter, [however,] the core insight is that motives (conscious or otherwise) are important causal forces doxastically (doxastic = of or relating to belief). Psychoanalysis, of course, contains other conceptual elements and theoretical postulates that we might not want to endorse or consider”

“Self-deception is a notoriously slippery notion that has eluded definitional consensus. Sackeim and Gur (1978) provided what is arguably the most widely accepted characterization in the psychological literature, claiming that self-deception consists in an individual holding two contradictory beliefs simultaneously; the individual, moreover, is aware of only one of these beliefs and is motivated to remain unaware of the other. This kind of conceptualization courts philosophical controversy in that it entails what are known as the “static” and “dynamic” paradoxes of self-deception […]. The static paradox consists in a self-deceived person being simultaneously in two contradictory states: the states of believing and disbelieving a particular proposition. The dynamic paradox arises out of the fact that in order for a person to engage in self-deception, he or she must know what he or she is doing; yet, in order for the project to work, he or she must not know what he or she is doing. Mele (this volume) offers a “deflationary” account of self-deception that skirts these paradoxes. In his account, self-deception occurs when a “desire that p” contributes to a “belief that p.” Mele outlines how this can happen unparadoxically (via such phenomena as negative and positive misinterpretation, selective focusing, and selective evidence-gathering). Regarding self-deception’s relationship to the notion of delusion, the two terms have been variously used as synonyms […], as qualitatively similar concepts that differ quantitatively […], and as quite distinct, if overlapping, concepts […] We argue that it is indeed useful to view delusion and self-deception as distinct concepts that intersect or overlap. […] Essentially, we view delusion as connoting both a dearth of evidential justification and an element of day-to-day dysfunction. A person is deluded when he or she has come to hold a particular belief with a degree of firmness that is utterly unwarranted by the evidence at hand and that jeopardizes their everyday functioning.[13] [Note that at the time they wrote the book, official diagnostic manuals such as the DSM-IV did not include ‘everyday functioning’ as a diagnostic criterion; so the criteria proposed are in some sense more restrictive than the ‘official ones’ applied at that time. I have no idea what the current criteria are, but I also don’t much care – US]. Self-deception, on the other hand, we view (with Mele) as paradigmatically motivated. Self-deceptive beliefs may or may not contradict reality, and they may or may not disrupt daily living. What is important is that they are not formed out of a wholly dispassionate quest for the truth, but rather are somehow influenced or biased by the prevailing motivational state of the individual concerned. […] Theoretically, at least, each may occur in isolation. Thus, some delusions may arise without self-deception via processes that are not remotely motivated. […] Conversely, certain instances of self-deception may not sufficiently disrupt functioning to warrant the label delusion. […] [Common] self-serving tendencies do not ordinarily merit usage of the term delusion.”

“A two-factor account [of delusions] offers distinct answers to […] two questions in terms of two departures from normality. The first factor explains why the false proposition seemed a somewhat salient and credible hypothesis or why it was initially adopted as a belief. The second factor explains why the proposition is not subsequently rejected.”

Another author in the book framed the distinction as one between the content of the delusion and the maintenance of the delusion. The second factor has been proposed to relate to belief evaluation mechanisms in some way or other, but most authors aren’t very specific when talking about this, and most of that coverage seemed to me very speculative. I won’t say much more about these accounts/conceptual models here, but I will note that a lot of pages in the book are spent on two-factor accounts. It seems to be difficult to explain both the content and the maintenance of delusions – to apply the terminology proposed in the other chapter – using only a single factor, which is why such models have been developed:

“The argument for a second factor in the etiology of delusions is that, both normally and normatively, the first factor is not sufficient to explain the delusion. The first factor prompts an apparently salient and somewhat credible hypothesis or candidate belief but the hypothesis or candidate belief normally could be, and normatively should be, rejected. Even if the first factor explains why the hypothesis is initially adopted as a belief, it does not explain the delusion because it does not explain why the belief is tenaciously maintained […] We need a second factor to answer the question of why the patient does not reject the belief.”

In the context of the distinction drawn above between deficit accounts and motivational accounts, it should be noted that two-factor accounts do not include motivational factors; they are purely deficit accounts, although one author in the book suggests that it might make sense to try to combine the models (a thought that also occurred to me while reading the book). Most accounts of this stuff were, as mentioned, speculative, but in one context – anosognosia in stroke victims – there actually seems to be some relevant knowledge and data:

“Patients with anosognosia fail to acknowledge, or even outright deny, their impairment or illness […] In this chapter, we shall be concerned with anosognosia for hemiplegia (paralysis of one side of the body) or, more generally, for motor impairments. A patient whose arm or leg is paralyzed or weak following a stroke may deny the weakness in response to questions like, “Is there anything wrong with your arm or leg? Is it weak, paralyzed or numb?” […], and they may continue to deny the impairment even when it has been demonstrated. For example, the examiner may ask the patient to raise both arms and then demonstrate to the patient that one arm is not raised as high as the other. […] According to the neuropsychological version of the two-factor theory, the second factor, which does its work after the generation of the delusional hypothesis, candidate belief, or initially adopted belief, is a deficit in the cognitive mechanisms responsible for belief evaluation and revision. No very detailed account of this second deficit has yet been provided […] Although the second deficit is poorly specified in terms of cognitive function, there are some suggestions about its neural basis. For example, following a right-hemisphere stroke, patients may deny ownership of their paralyzed left-side limbs […]. The fact that patients with somatoparaphrenia [denial of ownership: the patient may for example believe that ‘this is not my arm’, even if it is in fact his arm – US] generally have intact left hemispheres suggests that the second deficit results from right-hemisphere damage, and other evidence supports this suggestion.[4] […] Recent studies of patients in the first 10 days following a stroke suggest a rate of occurrence for anosognosia of 17–21% […] and 21–42% for right-hemisphere patients […] Studies also suggest a rate of occurrence for unilateral neglect [“Patients with unilateral neglect fail to respond to stimuli presented on the side opposite to their lesion”] of 23% […] and 32–42% among right-hemisphere patients [These kinds of delusional beliefs are, as illustrated by these numbers, not absurd notions which pop up in one in a million patients; they are really common in the stroke context. It should be mentioned here that in most patients the delusional beliefs seem to resolve over time: “by comparison with anosognosia in the first few days following a stroke, persisting anosognosia is relatively rare.”]”

“One way to reinterpret delusional subjects is to say that we’ve misidentified the content of the problematic belief. For example, we might say that rather than believing that his wife has been replaced by an impostor, the victim of the Capgras delusion believes that it is, in some respects, as if his wife has been replaced by an impostor. Another is to say that we’ve misidentified the attitude that the delusional subject bears to the content of the delusion. For example, Gregory Currie and coauthors have suggested that rather than believing that his wife has been replaced by an impostor, we should say that the victim of the Capgras delusion merely imagines that his wife has been replaced by an impostor. […] I want to suggest that […] we ought to say that delusional subjects don’t straightforwardly believe the contents of their delusions or straightforwardly imagine them. Instead, they bear some intermediate attitude — we might call it “bimagination” — with some of the distinctive features of believing and some of the distinctive features of imagining. […] People with inconsistent beliefs don’t just infer everything, and it often happens that they find themselves failing to believe some of the consequences of what they believe. […] Closure takes cognitive work, and some of the consequences of our beliefs are easier to spot than others. Here is an explanation of this fact: Not every belief-type representation is equally tied to every other. There are coordination constraints between belief-type representations. These sorts of connections encourage consistency among belief-type representations and the elaboration of belief-type representations or the production of new ones that represent the consequences of things represented in our various beliefs. But these connections are not equally strong everywhere. Some of our beliefs are very closely tied to one another, so the elimination of inconsistency and drawing of inferences comes easily or automatically. However, there are also pairs of belief-type representations that are not so closely tied to one another, where elimination of inconsistency and drawing of inferences is difficult and/or unlikely. […] the belief role isn’t monolithic. Within belief, there’s variation in the sort of behavior-guiding role that’s played by different beliefs, and there’s variation in the sort of inference-generating role that’s played by different beliefs. […]

Here, I think, is the moral: The belief role and the imagination role are a lot more complicated and a lot less unified than we might have thought. It’s not just a matter of a given representation being hooked up like a belief or hooked up like an imagining. A given belief-type representation will have a whole range of different connections to different behavior-planning mechanisms (or to different bits of the one mechanism, or different kinds of connections to the same bit of the same mechanism, or…) and a whole range of different kinds of connections to different representations of various types. There are no necessary connections between these various connections; it’s not the case that anything that’s got one element of a certain package has also got to have all of the rest because we see a variety of mix-and-match patterns even within belief. No belief-type representation plays the whole stereotypical belief role — regulating all behavior all the time and being equally and perfectly coordinated with all of our other beliefs. The different bits of the stereotypical role — for example, regulating this bit of behavior in these circumstances and combining with these sorts of beliefs to generate inference — are separable. Thus, there seems to be no principled reason to think that we can’t get a spectrum of cases, from clear, totally non-belief-like imaginings to clear, full-blooded, paradigmatic beliefs, with intermediate, hard-to-classify states in the middle.”

“If we think that a certain sort of evidence responsiveness is essential to belief, then, in many cases, we’ll be reluctant to say that delusional subjects genuinely believe the contents of their delusions. Thus, we’ll be uncomfortable with characterizing delusions as genuine beliefs. Another respect in which delusions are puzzling, which makes categorizing delusions as beliefs problematic, is that delusions, when compared to other beliefs, seem to have an importantly circumscribed role in subjects’ cognitive economies. […] The first way in which the cognitive role of delusions is circumscribed relative to that of paradigmatic beliefs is inferential. Delusional subjects often do not draw the sorts of inferences that we might expect from someone who believed the content of his or her delusion. A subject with the Capgras delusion, for example, who believes that his wife has been replaced by a duplicate, is likely not to adopt an overall worldview according to which it makes sense that his spouse should have been replaced by an impostor. […] Another respect in which the role of delusional beliefs is circumscribed is behavioral. Delusional subjects fail, in important ways, to act in ways that we would expect from someone who genuinely believed the things that he or she professed to believe. […] Finally, the delusional belief’s role in subjects’ emotional lives seems to be circumscribed as well. Subjects often do not seem to experience the sorts of affective responses that we would expect from someone who believed that, for example, his or her spouse had been replaced by an impostor. […] Imagining displays the right kind of evidence independence. In order to imagine that P, I needn’t have any evidence that P. Further, getting evidence that not-P or noticing that I already had such evidence needn’t interfere at all with my continuing to imagine that P. […] Classifying delusions as straightforward, paradigmatic cases of belief is problematic because it predicts that delusions ought not to display the sorts of circumscription and evidence independence that they in fact display. Classifying them as straightforward, paradigmatic cases of imagination is [however also] problematic because it predicts that they should display more circumscription and evidence independence than they in fact display. What would be nice would be to be able to say that the attitude is something in between paradigmatic belief and paradigmatic imagination — that delusional subjects are in states that play a role in their cognitive economies that is in some respects like that of a standard-issue, stereotypical belief that P and in other respects like that of a standard-issue, stereotypical imagining that P.”

“Some cases of self-deception seem to display the same sort of peculiar circumscription and insensitivity to evidence that’s characteristic of delusions. In these cases, we may have the same sort of reluctance to say that self-deceivers genuinely believe that the relevant proposition is true; however, it also doesn’t seem right to say that they merely desire that it’s true, either. Instead, they seem to be in an intermediate state between belief and desire. […] This would allow for the self-deceiver’s “belief” to be insensitive to evidence for its falsity in the same way as a desire, and yet to play some part of the behavior-guiding role of belief. It would also allow us to account for cases in which the self-deceiver’s “belief” has an impoverished behavior-guiding and inferential role, as seems to be the case sometimes. I don’t want to suggest that this is the right account of self-deception in general. I suspect that self-deception is a many-splendored thing and that there won’t be any single, unified account of self-deception in general to be found. Instead, I want to suggest that this sort of intermediate-state account is the right way to describe what’s going on in some restricted class of cases that might plausibly fall under the heading of “self-deception”—the ones that display the same peculiar sort of evidence independence and circumscription that we see in delusions.
There are, broadly speaking, two sorts of accounts of the origin of intermediate attitudes. […] In the first account, the representations are peculiar from the get-go: Something goes wrong in the original construction of the representation, so it winds up with a nonstandard role in the subject’s cognitive economy. In the second sort of account, the problematic representations start off with a fairly standard functional role and then they drift into some intermediate area. […] The self-deception account […] fits best with this second kind of origin. The most plausible sort of story seems to be one on which some representations start life as desires, but eventually acquire some aspects of the functional role of a belief.”

December 5, 2014 - Posted by | Books, Medicine, Neurology, Psychology
