Metacognition – Cognitive and Social Dimensions (III)
This will be my last post about the book. Below are some observations from its final chapters, along with a few comments.
“A critical dimension of social judgment is […] that reality, desired beliefs, and rules of justification [e.g. norms] combine to shape people’s reactions. […] One key feature affecting the normative level of adequacy, however, is that perceivers are notoriously ill-equipped when it comes to scrutinizing their own cognitive processes (for reviews, see Metcalfe & Shimamura, 1994; Nelson, 1992). In other words, although people are expected to call upon their metacognitive abilities to assess the quality of their knowledge about others, they are not very good at identifying the various ingredients comprising their judgment nor, for that matter, are they good at pinpointing the factors which led them to form a specific impression […] all current perspectives on person perception underline the fact that perceivers are extremely quick at categorizing others on the basis of a minimal amount of information. Categories provide people with a host of information about a specific target. Perceivers are thus likely to know quite a bit about any given person simply because of his or her category membership. The critical question then becomes to determine how exactly people are to interpret the resulting impression. Are perceivers in a position to disentangle the individuating from the category-based pieces of information? The answer seems to be that they are not. […] The message of [Nisbett & Wilson’s] provocative review of the literature is that people have little or no direct access to the processes that lead to particular contents of the mind. As a result, naive theories play a major role in people’s accounts of why they think what they think or why they do what they do.”
The idea behind this naive-theories view seems to be that we often don’t really know why we think the way we do; in the absence of actual knowledge about the factors that shaped a judgment about others, we make up our own theories to explain it. One way researchers have studied this has been to expose people to information about a target that included a bogus judgment mediator (some participants were told that, aside from the obviously relevant information about the target, they had also received subliminal information about the target of which they could not be consciously aware…) and then ask the judges to evaluate the target; it turns out that people tend to correct for supposed biases in their thought processes even when those biases do not exist. The study discussed in that chapter found that people exposed to the ‘subliminal information’ were more cautious about judging the target because they thought they might have been manipulated; they ended up discounting relevant knowledge in order to correct for what were in fact non-existent inputs to the social judgment. Other approaches, which exposed people to different types of information, have indicated that people tend to believe they’re more justified in judging others when they have more individuating information about the target. A related finding is this:
“perceivers may […] be very sensitive to the mode of acquisition of the information. Specifically, people may have more confidence in the evidence that they themselves gathered than in information they passively received. […] [Some] findings lend credit to the idea that perceivers who control the acquisition of the information express more confident and polarized ratings. They remain silent, however, as far as the underlying process is concerned.”
Here’s some more material about the ‘naive theories’ idea and about bias-correction mechanisms from a later chapter of the book:
“What metacognitive processes do people use to ensure that their assessments of and feelings toward targets are “accurate” or “legitimate?” In brief, we believe that corrections (i.e. attempts at removing bias from assessments of targets) are often the result of people consulting their naive theories (beliefs) of how potentially biasing factors have influenced (or might yet influence) their views of the target. […] Identification of possible bias is guided, in part, by people’s beliefs or theories about how factors in the judgment setting (factors both internal and external to the perceiver) influence perceptions of targets. Some naive theories are likely to be stored in memory and are then accessed when salient features of the biasing factor are present in the judgment setting. At times, however, naive theories of bias are likely to be generated on-line as biasing factors are encountered in a given situation. Of course, stored theories of bias might also be amended or otherwise changed by experience of the specific biasing factor in the given judgment setting. These perceptions of bias are “naive theories” in that a given perceiver is not likely to have direct access to the effect of the factor(s) on his or her judgments, nor is he or she likely to possess the evidence that would be necessary to know the influence of the factor on the perceptions of others […]. Thus, the person’s naive perception or theory of the effect of the factor is the person’s best estimate of the effect of the factor, regardless of whether that perception is in any way accurate or not (in fact, these theories will often be incorrect in either direction or magnitude).”
“If the perceiver believes that a bias is operating, and if the perceiver is both motivated and able to attempt corrections, then the perceiver engages in a correction guided by the theory of bias. Many different factors could influence motivation or ability to engage in corrections. For instance, some people are more motivated to engage in thoughtful activities in general (e.g. they are high in need for cognition, Cacioppo & Petty, 1982) or are more motivated to avoid “incorrect” judgments in particular (i.e. they are high in fear of invalidity; Thompson, Naccarato, & Parker, 1989). Of course, situational variations in motivation to put effort into a task or to avoid inaccuracy could also influence motivation to engage in corrections. It is also possible for people to identify a bias, but to be unmotivated to correct for it because the bias is viewed as legitimate or even necessary […]. Similarly, either situational or personal factors could distract perceivers or otherwise induce a cognitive load that would decrease ability for theory-based corrections […]. Interestingly, if people are highly motivated to correct and are attempting to do so, but are unable to accomplish this because of the imposition of a cognitive load, bias might even be exaggerated in some circumstances. That is, when people are actively attempting to suppress a thought under cognitive load, this thought can become more accessible than when the thought is not being suppressed […], and this can lead to the thought having a greater contaminating effect on judgment […]. In addition to cognitive load, qualities of the uncorrected perceptions of the target could also influence ability to correct”
“We assume that corrective processes ensue when people become aware of a potential bias (and are motivated and able to engage in corrections). People can become aware of a potential bias before, during, or after judging (or even encountering) the target. Accordingly, corrections for bias need not occur only after reacting to the target, but people might also anticipate a bias and attempt to avoid it by changing how information about the target is gathered or scrutinized. We regard such attempts at avoidance of bias as “preemptive corrections” […]. Especially before people have a great deal of experience with attempts to correct for a given biasing factor, such attempts would likely depend on some level of conscious awareness of the potential bias. However, with more experience of the factor and of the correction process, less conscious awareness of the bias might be sufficient for instigating the correction process (and the correction process itself might become less effortful, that is, to a certain extent, routinized […]). In fact, even in those cases where rather conscious awareness of the biasing factor occurs, we would not generally expect the whole of the correction process to be consciously reportable (consistent with Nisbett & Wilson, 1977). Rather, even if people are able to directly report the content of a given theory of bias, those same people might be unable to report which theory(ies) were used most in a correction, for example (i.e. even if content of a theory of bias is “explicit,” there can still be “implicit” effects of the theory […] corrections are driven by the perceptions of the bias in that judgment setting. That is, corrections are aimed at removing perceived rather than actual bias. Although perceived and actual bias might coincide in certain circumstances, the two elements are conceptually distinct from one another. 
That is, a person might believe that a particular bias exists (and might attempt to remove that perceived bias) when no bias exists or even when a bias in the opposite direction is objectively present. […] A variety of factors might determine the nature of theories of bias and the likelihood that those theories guide corrective attempts. […] In many settings, the theory of bias that is used is probably some combination of a theory stored in memory along with adjustments to the theory based on the perceiver’s subjective experience of the context and target to be judged.”
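The distinction between perceived and actual bias can be made concrete with a small numerical sketch. This is my own illustration, not the authors’: it assumes a simple additive model in which a raw impression is the true value plus whatever bias is actually operating, and the judge then subtracts only the bias they *believe* is present. The correction is then exactly as good as the naive theory behind it:

```python
# Toy model (my own illustration, not the authors'): a judge corrects a raw
# impression by subtracting the bias they *perceive*, not the bias that is
# actually present. The additive form is a simplifying assumption.

def corrected_judgment(true_value, actual_bias, perceived_bias):
    """Raw impression = true value + actual bias; the judge then
    subtracts only the bias they believe was operating."""
    raw_impression = true_value + actual_bias
    return raw_impression - perceived_bias

# Accurate theory of bias: the correction removes the bias entirely.
print(corrected_judgment(5.0, actual_bias=2.0, perceived_bias=2.0))   # 5.0

# No actual bias, but the judge believes there is one (the bogus
# 'subliminal information' case): the correction itself creates the error.
print(corrected_judgment(5.0, actual_bias=0.0, perceived_bias=2.0))   # 3.0

# Theory points in the wrong direction: the 'correction' doubles the bias.
print(corrected_judgment(5.0, actual_bias=2.0, perceived_bias=-2.0))  # 9.0
```

The middle case is the one from the chapter discussed above: judges who discounted perfectly relevant knowledge to offset inputs that were never there.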
“some theory-based corrections are likely to be more cognitively taxing and more thorough than others. In related areas (e.g. attitude change), assessments based on high levels of effortful, integrative elaboration of the qualities of targets lead those assessments of the target to persist over time, resist future attempts at change, and predict behavior better than less elaborated assessments […]. We believe that the same principles apply to corrections as well. If a corrective attempt involves effortful consideration of the qualities of the target and corrections of those considerations, such a correction is more likely to persist over time, resist change, and predict future judgments and behavior than a correction that is not based on a thorough scrutiny of target qualities. A variety of factors might help to determine the extent to which corrections involve elaboration of target-relevant information (e.g. the extent of target-relevant knowledge the person possesses, the importance to the person of arriving at an unbiased assessment of the target, time pressures for judgment, etc.).”
“It is not uncommon to avoid stimuli that we think will elicit negative emotions […] As noted by Wilson and Brekke (1994), people are often at risk of mental contamination, defined as “the process whereby a person has an unwanted judgment, emotion, or behavior because of mental processing that is unconscious or uncontrollable” […]. Wilson and Brekke argued that people’s susceptibility to mental contamination is in part a function of the accuracy of lay theories about how the mind operates. The strategies people use to avoid contamination – such as covering their eyes or changing the [TV] channel – are largely a function of their theories about how their attitudes, emotions, and beliefs change.”
“If people truly want to avoid unwanted belief change, they are better off using earlier mental strategies such as exposure control and mental preparation [rather than later strategies such as resistance and remediation (roughly: belief adjustment/correction which takes place after exposure)]. […] people might be better off if they recognized the limits of resistance, remediation, and behavior control and engaged in some judicious exposure control and mental preparation. […] The belief that other people would be more influenced by mental contaminants than oneself has been found in a variety of other studies […]. One reason for this difference, we suggest, is that people believe that they have a greater ability to resist or remediate the effects of a contaminant than other people do.”
“Obviously, people can make judgments and, obviously, people can provide rationales for their judgments, but what is the relation between the two? […] Do people consciously consider various features of the judgment situation, weigh the positive and negative features of the context and target, develop a conscious understanding of the effects of these features, and then make judgments based upon that understanding? Or do people arrive at judgments for reasons of which they may not be entirely aware, and only later attempt to piece together why they might have made the judgments they did? Each of these positions has its proponents in current social judgment theorizing. […] [We conclude from] our own research on judgmental correction processes [that] people’s conscious theories of social judgment are generally not causal, a priori, or accurate. Rather, these theories are descriptions of what people think they observed themselves doing while forming a judgment, and these theories influence judgments primarily when people are sensitized to (e.g. warned about) a particularly salient bias.”