i. “One must wait until the evening to see how splendid the day was; one cannot judge life until death.” (Charles de Gaulle)
ii. “Courage is what it takes to stand up and speak. Courage is also what it takes to sit down and listen.” (Winston Churchill)
iii. “The intensity of people’s views on a topic is inversely proportional to the amount of evidence available on the topic.” (Jeffrey Hammer)
iv. “Silence is the best resolve for him who distrusts himself.” (Rochefoucauld)
v. “Old men delight in giving good advice as a consolation for the fact that they can no longer set bad examples.” (-ll-)
vi. “No man is clever enough to know all the evil he does.” (-ll-)
vii. “However we distrust the sincerity of those whom we talk with, we always believe them more sincere with us than with others.” (-ll-)
viii. “No people are more often wrong than those who will not allow themselves to be wrong.” (-ll-)
ix. “We have few faults which are not far more excusable than the means we adopt to hide them.” (-ll-)
x. “We should earnestly desire but few things if we clearly knew what we desired.” (-ll-)
xi. “Those who quit their proper character to assume what does not belong to them, are for the greater part ignorant both of the character they leave and of the character they assume.” (Edmund Burke)
xii. “Flattery corrupts both the receiver and the giver.” (-ll-)
xiii. “Patience, n. A minor form of despair, disguised as a virtue.” (Ambrose Bierce, The Devil’s Dictionary)
xiv. “Preference, n. A sentiment, or frame of mind, induced by the erroneous belief that one thing is better than another.” (-ll-)
xv. “Religion, n. A daughter of Hope and Fear, explaining to Ignorance the nature of the Unknowable.” (-ll-)
xvi. “Ultimatum, n. In diplomacy, a last demand before resorting to concessions.” (-ll-)
xvii. “Idiot, n. A member of a large and powerful tribe whose influence in human affairs has always been dominant and controlling.” (-ll-)
xviii. “I share no man’s opinions; I have my own.” (Ivan Turgenev)
xix. “You can discover what your enemy fears most by observing the means he uses to frighten you.” (Eric Hoffer)
xx. “Virtue never has been as respectable as money.” (Mark Twain)
In this post I’ll cover a bit more of Hargie’s stuff. I once again refer to this post for general comments and observations.
“There is now a considerable volume of research into the accuracy of first impressions (AFI). Those who score highly in terms of AFI tend to be more socially skilled, popular with peers, experience lower levels of loneliness, depression and anxiety, have higher quality of personal relationships, and achieve more senior positions and higher salaries at work. In their review of this area, Hall and Andrzejewski (2008: 98) concluded: ‘A large amount of research shows that it is good to be able to draw accurate inferences about people based on first impressions.’ [...] we make evaluations of others at a very early stage and based upon minimal evidence [...] Thus, Willis and Todorov (2006) demonstrated that after as little as one-tenth of a second we have made inferences based upon the facial appearance of the other person, and we then tend to become anchored on this initial judgement. Our early perceptions influence our expectations, and this in turn shapes our behaviour. Initial perceptions also impact upon subsequent processing, since we tend to adapt any conflicting information to make it fit more easily with our existing cognitive frame (Adler et al., 2006).” [See also Funder]
“People organise their physical spaces to make statements about their identity [...] we form impressions of individuals based on how they organise their spaces. [...] Thus, the nature of the environment affects initial impressions. [...] The age, sex, dress and general appearance of the other person all affect the initial perceptual set that is induced [...] In their review of this area, Whetzel and McDaniel (1999: 222) concluded: ‘Interviewers’ reactions to job candidates are strongly influenced by style of dress and grooming. Persons judged to be attractive or appropriately groomed or attired, received higher ratings than those judged to be inappropriately dressed or unattractive.’ [...] in their review of the field Roehling et al. (2008: 392) concluded: ‘Research indicates that overweight job applicants and employees are stereotypically viewed as being less conscientious, less agreeable, less emotionally stable, and less extraverted than their “normal-weight” counterparts.’”
“In many interpersonal transactions, one encounter is influenced by decisions made and commitments undertaken in the previous meeting. [...] ‘When people meet and interact with new acquaintances, they often use expectations about what these other people will be like to guide their interactions.’ In this way, people approach social encounters with certain explicit or implicit expectations, which they expect to have fulfilled (Hamilton, 2005). If expectations are unrealistic or misplaced, it is important to discover this and make it clear at a very early stage.”
“Those with a high need for closure [see this post for details on this variable] are more heavily influenced by first impressions as they search for aspects to seize upon in terms of decision making. They desire clearly structured interactions with transparent goals, and readily accept the need to bring an encounter to an end in a neat and tidy manner. On the other hand, individuals with a low need for closure are less likely to make judgements based upon initial information. They prefer interactions that are loosely structured with less clear-cut goals, and they can be difficult to persuade that it is time to terminate an interaction. As a result, with this type of person, closure can be more prolonged and messier.”
“a crucial determinant of assertion is motivation to act rather than lack of understanding of how to be assertive.”
“submissive people laugh much more at the humour of dominant individuals than vice versa”
“In reviewing research in this field, Rakos (1991) illustrated how nonassertive individuals emit roughly equal numbers of positive and negative self-statements in conflict situations whereas assertive people generate about twice as many positive as negative self-statements. [...] Nonassertive individuals have a higher frequency of negative self-statements and a greater belief that their behaviour will lead to negative consequences.”
“Nonassertive responses involve expressing oneself in such a self-effacing, apologetic manner that one’s thoughts, feelings and rights can easily be ignored. [...] The objective here is to appease others and avoid conflict at any cost. [...] Nonassertive individuals:
• tend to avoid public attention
• use minimal self-disclosure or remain silent so as not to receive criticism for what they say
• are modest and self-deprecating
• use self-handicapping strategies whereby they underestimate potential future achievements so as to avoid negative evaluation if they fail
• if they have to engage with others, prefer to play a passive, friendly and very agreeable role.
[...] Assertive responses involve standing up for oneself, yet taking the other person into consideration. The assertive style involves:
• answering spontaneously
• speaking with a conversational yet firm tone and volume
• looking at the other person
• addressing the main issue
• openly and confidently expressing personal feelings and opinions
• valuing oneself equal to others
• being prepared to listen to the other’s point of view
[...] Verbal aggression has been defined as ‘behavior that attacks an individual’s self-concept in order to deliver psychological pain’ [...] Such behaviours include attacks on one’s ability, character or appearance, name calling, profanity, the use of demands, blunt directives and threats – all of which violate the rights of the other person. Using this style, the aggressor:
• interrupts and answers before the other is finished speaking
• talks loudly and abrasively
• glares at the other person
• speaks ‘past’ the issue (accusing, blaming, demeaning)
• vehemently and arrogantly states feelings and opinions in a dogmatic fashion
• values self above others
• hurts others to avoid personal hurt.
[...] Assertiveness forms the mid-point of this continuum and is usually the most appropriate response.
Aggressive individuals tend to be viewed as intransigent, coercive, overbearing and lacking in self-control. They may initially get their own way by browbeating and creating fear in others, but they are usually disliked and avoided. Alternatively, this style may provoke a similar response from others, with the danger that the verbal aggression may escalate [...] Nonassertive individuals, on the other hand, are often viewed as weak, ‘mealymouthed’ creatures who can be easily manipulated, and as a result they frequently express dissatisfaction with their lives, owing to a failure to attain personal goals. They may be less likely to inspire confidence in others or may even be seen as incompetent. [...] One common pitfall is that individuals move from prolonged nonassertion straight into aggression, feeling they can no longer put up with being used, taken for granted or having their rights ignored. But such a sudden and unexpected explosion of anger is not the best approach, and indeed can destroy relationships.”
“There is consistent research evidence to show that standard direct assertion is viewed as being as effective as and more socially desirable than aggressive behaviour, and more socially competent but distinctly less likeable than nonassertion [...]. It seems that assertiveness is evaluated positively in theory, but when faced with the practical reality is rated less favourably than nonassertion [...] while people tend to respect assertive individuals, they often do not like to have to deal with assertive responses. [...] We like and probably have more empathy for nonassertive people. Thus, assertion needs to be used sensitively. Assertiveness can provoke a number of adverse reactions. This may especially be the case when a change in style from submissiveness to assertiveness is made.”
“Humans, like all animals, are territorial and our sense of place is very important to how we respond [...] it is easier to be assertive when we are on our own ground. [...] Few individuals are assertive across all contexts. Most find it easier to assert themselves in some situations than in others. Attention needs to be devoted to situations in which the individual finds it difficult to be assertive, and strategies devised to overcome the particular problems.”
“The main nonverbal assertive behaviours are: medium levels of eye contact; avoidance of inappropriate facial expressions; smooth use of gestures while speaking, yet inconspicuous while listening; upright posture; direct body orientation; medium interpersonal distance; and appropriate paralinguistics (short response latency, medium response length, good fluency, medium volume and inflection, increased firmness).”
“In terms of definition, while the terms influence and persuasion are often viewed as synonyms and used interchangeably, in fact there are [some] differences between the two processes [...] Knowles and Riner (2007) argued that persuasion is used to attempt to overcome some level of resistance to the message [...] persuasion always involves influence [which does not require resistance to be present, US], but influence does not always involve persuasion. [...] resistance can take many forms. Yukl (2010) identified six main variants: overt refusal to carry out the request; explanations or excuses as to why the request cannot be complied with; attempts to persuade the agent to alter or withdraw the request; appeals to a higher authority to have the request removed; delays in responding so that the requested action is not carried out; and pretending to comply while secretly attempting to sabotage the assignment. [...] Targets can [...] be encouraged to become more resistant to persuasion messages. There are two main methods whereby this can be achieved: forewarning and inoculation. [...] [Forewarning] relates to the process wherein the target audience is told something about the person or message they are about to encounter. [...] [Inoculation] is a stronger form of forewarning as it actively prepares targets to refute the messages that will be received”
“One issue that is linked to forewarning is whether or not there is any delay between warning and message delivery. In a meta-analysis of research in this field, the main conclusion reached by Benoit (1998: 146) was that: ‘Forewarning an audience to expect a persuasive message tends to make that message less persuasive . . . regardless of type of warning . . . (or) presence of delay.’ However, Benoit found that to be effective the warning must come before the persuasion attempt. A message does not lose its persuasiveness if the warning is given after it has been delivered. [...] It appears that when targets are forewarned they adopt a less receptive frame of mind and become more resistant to the perceived ‘interference’. Overall, it is best not to have a forewarned target when making a persuasion attempt. [...] The success of inoculation is affected by two main processes – delay and decay. Delay refers to the time it takes the target to generate counter-arguments with which to resist the message, while decay relates to the extent to which these arguments lose their force over time [...]. Techniques that reduce delay and protect against decay are therefore important in maximising the effectiveness of inoculation. Thus, it has been shown that the more effort that targets devote to the development of counter-arguments to possible future challenges by engaging in what has been termed cognitive work, the greater is their resistance to later counter-persuasion attempts [...]. Another useful antidote to delay and decay is the technique of rote learning. Many religions and cults get members to rote learn sets of beliefs, prayers and key statements (e.g. biblical passages) so that they become embedded in their psyche, and as such very resistant to change. As part of this process, it is possible to get individuals to rote learn refutational arguments against future counter-messages. 
A related tactic here is that of anchoring, which involves connecting the forthcoming new message to an already established belief or set of values. It then becomes difficult to change one without the other. [...] Another form of pre-emption has been termed stealing thunder, which involves disclosing incriminating evidence about oneself or one’s client, rather than have this revealed by someone else. [...] [Williams and Dolnik's] review of [the] research concluded that: ‘Stealing thunder has been shown to be an effective method of minimizing the impact of damaging information in a variety of different contexts’”
“Those who are able to administer punishments also have considerable power. [...] There is a symbiotic relationship between reward and coercive power. Usually someone who can reward us can ipso facto also punish us (e.g. by withholding the rewards). [...] we tend to like those who reward us and dislike those who threaten or punish us.”
“Our behaviour is shaped to a considerable extent by our wish to belong to and be accepted by certain groups of people [...] we are likely to adopt the response patterns of those we identify with, like, and to whose group we aspire. [...] Referent power is most potent under two conditions. First, if we are uncertain about how to behave in a situation we follow the ‘herd instinct’ by looking to members of our reference group for guidance, and copying what they do. [...] Second, we are more likely to be influenced by similar others. [...] We are much more likely to be influenced by those with whom we have developed a close relationship. [...] As we get to know people our liking for them tends to increase, providing this takes place within a conducive context. Even a few minutes of initial relational communication with a stranger prior to making a persuasion attempt can significantly increase the success rate”
“people who are perceived to be experts, in that they have specialised knowledge or technical skill, have high persuasive power. Here the basis of the power is the extent to which the agent is seen as an authority. [...] in terms of expert power we give less credence to those whom we regard as honest if they are incompetent, and tend to have lower trust in those with a vested interest regardless of their level of expertise. One highly influential event is when people argue against their own interest.”
“In terms of delivery of arguments, these are more persuasive when there is a powerful speech style, wherein the person speaks in a firm, authoritative tone and uses intensifiers [...] By contrast, a powerless style is characterised by five main features:
1. hesitations (‘Um . . . ah . . .’)
2. hedges or qualifiers (‘I sort of think . . .’, ‘It might possibly be . . .’)
3. disclaimers (‘I don’t have any real knowledge of this area, however . . .’, ‘I might be wrong, but . . .’)
4. tag questions (‘. . . don’t you think?’, ‘. . . isn’t it?’), and statements made with a questioning intonation
5. lower voice volume.
In a review of the area, Durik et al. (2008) concluded ‘that messages with hedges led to less persuasion, more negative perceptions of the source, and weaker evaluations of the argument’. In their meta-analytic review of research in this area, Burrell and Koper (1998) found that a powerful speech pattern was perceived to be more credible and persuasive. Likewise, in their analysis of this field, Holtgraves and Lasky (1999: 196) concluded: ‘A speaker who uses powerless language will be perceived as less assertive, competent, credible, authoritative, and in general evaluated less favorably than a speaker who uses powerful language.’ If uncertainty has to be expressed, a powerful style should employ authoritative doubt, which underlines that the dubiety is from a vantage point of expertise”
“Repetition of arguments has been shown to be effective in increasing their persuasive power [...] Statements heard more than once tend to be rated as more valid than those heard for the first time – an effect known as the illusion of truth. [...] Song and Schwarz (2010: 111) in their review of research [concluded]: ‘The mere repetition of a statement facilitates its perception as true.’ [...] [One] question is whether a speaker should have a clear and explicit conclusion at the end of an argument or leave this implicit and allow the audience to draw it out for themselves [...]. The evidence here is clear: ‘Messages with explicit conclusions are more persuasive than those with implicit conclusions’ (O’Keefe, 2006: 334). [...] An important decision is whether to use one-sided or two-sided arguments. In other words, should the disadvantages of what is being recommended also be recognised? Research findings show that one-sided messages are best with those who already support the view being expressed. [...] When preaching to the converted it is necessary to target the message in a single direction. One-sided messages are also better with those of a lower IQ, who may become confused if presented with seemingly contradictory arguments [...] Two-sided arguments are more appropriate with those with a higher IQ.”
“As a core emotion, threatening messages that heighten our sense of fear can be very effective in changing attitudes and behaviour [...] In a meta-analysis of research into fear arousal and persuasion, Mongeau (1998: 65) concluded: ‘Overall, increasing the amount of fear-arousing content in a persuasive message is likely to generate greater attitude and behavior change.’ However, he also found that the use of this tactic was not always successful. Fear is more effective with older subjects (i.e. with adults as opposed to schoolchildren) and with low anxiety individuals. With highly anxious people it may backfire, so that the heightened anxiety induced by an intense fear scenario can inhibit attention and increase distraction. This in turn reduces comprehension, or results in the message either being ignored completely or rejected. Thus, if people already have very high levels of fear about a subject, attempting to increase this even further has been shown to be counterproductive”
“Most people are susceptible to appeals to conscience, in the form of reminders that we have a duty to ‘do the right thing’, and that if we do not fulfil our moral obligations we will feel bad about ourselves. [...] In his review of the area, O’Keefe (2006) [however] noted that guilt may backfire in that it may cause the target not to change their behaviour in line with previous attitudes and beliefs, but instead to change those attitudes and beliefs to be consistent with the new behaviour. O’Keefe also illustrated that while more explicit guilt appeals do induce greater guilt, less explicit guilt appeals are actually more effective in changing behaviour. The reason for this seems to be that more explicit guilt appeals induce greater resentment or anger in the target, and this tempers the success of the appeal. [...] Since we do not like to be made to feel guilty, we tend to dislike the person who has caused this to occur, and we are then more likely to avoid them in future.”
“The dodo (Raphus cucullatus) is an extinct flightless bird that was endemic to the island of Mauritius, east of Madagascar in the Indian Ocean. Its closest genetic relative was the also extinct Rodrigues solitaire, the two forming the subfamily Raphinae of the family of pigeons and doves. [...] Subfossil remains show the dodo was about 1 metre (3.3 feet) tall and may have weighed 10–18 kg (22–40 lb) in the wild. The dodo’s appearance in life is evidenced only by drawings, paintings and written accounts from the 17th century. Because these vary considerably, and because only some illustrations are known to have been drawn from live specimens, its exact appearance in life remains unresolved. Similarly, little is known with certainty about its habitat and behaviour.”
“The first recorded mention of the dodo was by Dutch sailors in 1598. In the following years, the bird was hunted by sailors, their domesticated animals, and invasive species introduced during that time. The last widely accepted sighting of a dodo was in 1662. Its extinction was not immediately noticed, and some considered it to be a mythical creature. In the 19th century, research was conducted on a small quantity of remains of four specimens that had been brought to Europe in the early 17th century. Among these is a dried head, the only soft tissue of the dodo that remains today. Since then, a large amount of subfossil material has been collected from Mauritius [...] The dodo was anatomically similar to pigeons in many features. [...] The dodo differed from other pigeons mainly in the small size of the wings and the large size of the beak in proportion to the rest of the cranium. [...] Many of the skeletal features that distinguish the dodo and the Rodrigues solitaire, its closest relative, from pigeons have been attributed to their flightlessness. [...] The lack of mammalian herbivores competing for resources on these islands allowed the solitaire and the dodo to attain very large sizes.” [If the last sentence sparked your interest and/or might be something about which you'd like to know more, I have previously covered a great book on related topics here on the blog]
“The etymology of the word dodo is unclear. Some ascribe it to the Dutch word dodoor for “sluggard”, but it is more probably related to Dodaars, which means either “fat-arse” or “knot-arse”, referring to the knot of feathers on the hind end. [...] The traditional image of the dodo is of a very fat and clumsy bird, but this view may be exaggerated. The general opinion of scientists today is that many old European depictions were based on overfed captive birds or crudely stuffed specimens.”
“Like many animals that evolved in isolation from significant predators, the dodo was entirely fearless of humans. This fearlessness and its inability to fly made the dodo easy prey for sailors. Although some scattered reports describe mass killings of dodos for ships’ provisions, archaeological investigations have found scant evidence of human predation. [...] The human population on Mauritius (an area of 1,860 km2 or 720 sq mi) never exceeded 50 people in the 17th century, but they introduced other animals, including dogs, pigs, cats, rats, and crab-eating macaques, which plundered dodo nests and competed for the limited food resources. At the same time, humans destroyed the dodo’s forest habitat. The impact of these introduced animals, especially the pigs and macaques, on the dodo population is currently considered more severe than that of hunting. [...] Even though the rareness of the dodo was reported already in the 17th century, its extinction was not recognised until the 19th century. This was partly because, for religious reasons, extinction was not believed possible until later proved so by Georges Cuvier, and partly because many scientists doubted that the dodo had ever existed. It seemed altogether too strange a creature, and many believed it a myth.”
I found some of the contemporary accounts and illustrations included in the article, from which behavioural patterns etc. have been inferred, quite depressing. Two illustrative quotes and a contemporary engraving are included below:
“Blue parrots are very numerous there, as well as other birds; among which are a kind, conspicuous for their size, larger than our swans, with huge heads only half covered with skin as if clothed with a hood. [...] These we used to call ‘Walghvogel’, for the reason that the longer and oftener they were cooked, the less soft and more insipid eating they became. Nevertheless their belly and breast were of a pleasant flavour and easily masticated.”
“I have seen in Mauritius birds bigger than a Swan, without feathers on the body, which is covered with a black down; the hinder part is round, the rump adorned with curled feathers as many in number as the bird is years old. [...] We call them Oiseaux de Nazaret. The fat is excellent to give ease to the muscles and nerves.”
“The Armero tragedy [...] was one of the major consequences of the eruption of the Nevado del Ruiz stratovolcano in Tolima, Colombia, on November 13, 1985. After 69 years of dormancy, the volcano’s eruption caught nearby towns unaware, even though the government had received warnings from multiple volcanological organizations to evacuate the area when volcanic activity had been detected in September 1985.
As pyroclastic flows erupted from the volcano’s crater, they melted the mountain’s glaciers, sending four enormous lahars (volcanically induced mudslides, landslides, and debris flows) down its slopes at 50 kilometers per hour (30 miles per hour). The lahars picked up speed in gullies and coursed into the six major rivers at the base of the volcano; they engulfed the town of Armero, killing more than 20,000 of its almost 29,000 inhabitants. Casualties in other towns, particularly Chinchiná, brought the overall death toll to 23,000. [...] The relief efforts were hindered by the composition of the mud, which made it nearly impossible to move through without becoming stuck. By the time relief workers reached Armero twelve hours after the eruption, many of the victims with serious injuries were dead. The relief workers were horrified by the landscape of fallen trees, disfigured human bodies, and piles of debris from entire houses. [...] The event was a foreseeable catastrophe exacerbated by the populace’s unawareness of the volcano’s destructive history; geologists and other experts had warned authorities and media outlets about the danger over the weeks and days leading up to the eruption.”
“The day of the eruption, black ash columns erupted from the volcano at approximately 3:00 pm local time. The local Civil Defense director was promptly alerted to the situation. He contacted INGEOMINAS, which ruled that the area should be evacuated; he was then told to contact the Civil Defense directors in Bogotá and Tolima. Between 5:00 and 7:00 pm, the ash stopped falling, and local officials instructed people to “stay calm” and go inside. Around 5:00 pm an emergency committee meeting was called, and when it ended at 7:00 pm, several members contacted the regional Red Cross over the intended evacuation efforts at Armero, Mariquita, and Honda. The Ibagué Red Cross contacted Armero’s officials and ordered an evacuation, which was not carried out because of electrical problems caused by a storm. The storm’s heavy rain and constant thunder may have overpowered the noise of the volcano, and with no systematic warning efforts, the residents of Armero were completely unaware of the continuing activity at Ruiz. At 9:45 pm, after the volcano had erupted, Civil Defense officials from Ibagué and Murillo tried to warn Armero’s officials, but could not make contact. Later they overheard conversations between individual officials of Armero and others; famously, a few heard the Mayor of Armero speaking on a ham radio, saying “that he did not think there was much danger”, when he was overtaken by the lahar.”
“The lahars, formed of water, ice, pumice, and other rocks, incorporated clay from eroding soil as they traveled down the volcano’s flanks. They ran down the volcano’s sides at an average speed of 60 kilometers (40 mi) per hour, dislodging rock and destroying vegetation. After descending thousands of meters down the side of the volcano, the lahars followed the six river valleys leading from the volcano, where they grew to almost four times their original volume. In the Gualí River, a lahar reached a maximum width of 50 meters (160 ft).
Survivors in Armero described the night as “quiet”. Volcanic ash had been falling throughout the day, but residents were informed it was nothing to worry about. Later in the afternoon, ash began falling again after a long period of quiet. Local radio stations reported that residents should remain calm and ignore the material. One survivor reported going to the fire department to be informed that the ash was “nothing”. [...] At 11:30 pm, the first lahar hit, followed shortly by the others. One of the lahars virtually erased Armero; three-quarters of its 28,700 inhabitants were killed. Proceeding in three major waves, this lahar was 30 meters (100 ft) deep, moved at 12 meters per second (39 ft/s), and lasted ten to twenty minutes. Traveling at about 6 meters (20 ft) per second, the second lahar lasted thirty minutes and was followed by smaller pulses. A third major pulse brought the lahar’s duration to roughly two hours; by that point, 85 percent of Armero was enveloped in mud. Survivors described people holding on to debris from their homes in attempts to stay above the mud. Buildings collapsed, crushing people and raining down debris. The front of the lahar contained boulders and cobbles which would have crushed anyone in their path, while the slower parts were dotted by fine, sharp stones which caused lacerations. Mud moved into open wounds and other open body parts – the eyes, ears, and mouth – and placed pressure capable of inducing traumatic asphyxia in one or two minutes upon people buried in it.”
“The volcano continues to pose a serious threat to nearby towns and villages. Of the threats, the one with the most potential for danger is that of small-volume eruptions, which can destabilize glaciers and trigger lahars. Although much of the volcano’s glacier mass has retreated, a significant volume of ice still sits atop Nevado del Ruiz and other volcanoes in the Ruiz–Tolima massif. Melting just 10 percent of the ice would produce lahars with a volume of up to 200 million cubic meters – similar to the lahar that destroyed Armero in 1985. In just hours, these lahars can travel up to 100 km along river valleys. Estimates show that up to 500,000 people living in the Combeima, Chinchina, Coello-Toche, and Guali valleys are at risk, with 100,000 individuals being considered to be at high risk.”
iii. Asteroid belt (featured).
“The asteroid belt is the region of the Solar System located roughly between the orbits of the planets Mars and Jupiter. It is occupied by numerous irregularly shaped bodies called asteroids or minor planets. The asteroid belt is also termed the main asteroid belt or main belt to distinguish its members from other asteroids in the Solar System such as near-Earth asteroids and trojan asteroids. About half the mass of the belt is contained in the four largest asteroids, Ceres, Vesta, Pallas, and Hygiea. Vesta, Pallas, and Hygiea have mean diameters of more than 400 km, whereas Ceres, the asteroid belt’s only dwarf planet, is about 950 km in diameter. The remaining bodies range down to the size of a dust particle.”
“The asteroid belt formed from the primordial solar nebula as a group of planetesimals, the smaller precursors of the planets, which in turn formed protoplanets. Between Mars and Jupiter, however, gravitational perturbations from Jupiter imbued the protoplanets with too much orbital energy for them to accrete into a planet. Collisions became too violent, and instead of fusing together, the planetesimals and most of the protoplanets shattered. As a result, 99.9% of the asteroid belt’s original mass was lost in the first 100 million years of the Solar System’s history.”
“In an anonymous footnote to his 1766 translation of Charles Bonnet‘s Contemplation de la Nature, the astronomer Johann Daniel Titius of Wittenberg noted an apparent pattern in the layout of the planets. If one began a numerical sequence at 0, then included 3, 6, 12, 24, 48, etc., doubling each time, and added four to each number and divided by 10, this produced a remarkably close approximation to the radii of the orbits of the known planets as measured in astronomical units. This pattern, now known as the Titius–Bode law, predicted the semi-major axes of the six planets of the time (Mercury, Venus, Earth, Mars, Jupiter and Saturn) provided one allowed for a “gap” between the orbits of Mars and Jupiter. [...] On January 1, 1801, Giuseppe Piazzi, Chair of Astronomy at the University of Palermo, Sicily, found a tiny moving object in an orbit with exactly the radius predicted by the Titius–Bode law. He dubbed it Ceres, after the Roman goddess of the harvest and patron of Sicily. Piazzi initially believed it a comet, but its lack of a coma suggested it was a planet. Fifteen months later, Heinrich Wilhelm Olbers discovered a second object in the same region, Pallas. Unlike the other known planets, the objects remained points of light even under the highest telescope magnifications instead of resolving into discs. Apart from their rapid movement, they appeared indistinguishable from stars. Accordingly, in 1802 William Herschel suggested they be placed into a separate category, named asteroids, after the Greek asteroeides, meaning “star-like”. [...] The discovery of Neptune in 1846 led to the discrediting of the Titius–Bode law in the eyes of scientists, because its orbit was nowhere near the predicted position. [...] One hundred asteroids had been located by mid-1868, and in 1891 the introduction of astrophotography by Max Wolf accelerated the rate of discovery still further. A total of 1,000 asteroids had been found by 1921, 10,000 by 1981, and 100,000 by 2000. 
Modern asteroid survey systems now use automated means to locate new minor planets in ever-increasing quantities.”
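The Titius–Bode rule quoted above is simple enough to sketch in a few lines of Python; the modern semi-major axes below are approximate values I’ve filled in myself for comparison (they are not from the article):

```python
# Titius–Bode rule as described above: start the sequence 0, 3, then keep
# doubling; add four to each term and divide by ten to get orbit radii in AU.
def titius_bode(n_terms):
    seq = [0, 3]
    while len(seq) < n_terms:
        seq.append(seq[-1] * 2)
    return [(x + 4) / 10 for x in seq]

# Approximate modern semi-major axes (AU) for comparison.
actual = [
    ("Mercury", 0.39), ("Venus", 0.72), ("Earth", 1.00), ("Mars", 1.52),
    ("Ceres", 2.77), ("Jupiter", 5.20), ("Saturn", 9.58),
    ("Uranus", 19.19), ("Neptune", 30.07),
]
for (name, a), p in zip(actual, titius_bode(len(actual))):
    print(f"{name:8s} predicted {p:5.1f} AU, actual {a:5.2f} AU")
```

The fifth slot comes out at 2.8 AU – the “gap” in which Ceres was found – while Neptune’s predicted 38.8 AU is nowhere near its actual ~30 AU, which is the failure the article mentions.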
“In 1802, shortly after discovering Pallas, Heinrich Olbers suggested to William Herschel that Ceres and Pallas were fragments of a much larger planet that once occupied the Mars–Jupiter region, this planet having suffered an internal explosion or a cometary impact many million years before. Over time, however, this hypothesis has fallen from favor. [...] Today, most scientists accept that, rather than fragmenting from a progenitor planet, the asteroids never formed a planet at all. [...] The asteroids are not samples of the primordial Solar System. They have undergone considerable evolution since their formation, including internal heating (in the first few tens of millions of years), surface melting from impacts, space weathering from radiation, and bombardment by micrometeorites. [...] collisions between asteroids occur frequently (on astronomical time scales). Collisions between main-belt bodies with a mean radius of 10 km are expected to occur about once every 10 million years. A collision may fragment an asteroid into numerous smaller pieces (leading to the formation of a new asteroid family). Conversely, collisions that occur at low relative speeds may also join two asteroids. After more than 4 billion years of such processes, the members of the asteroid belt now bear little resemblance to the original population. [...] The current asteroid belt is believed to contain only a small fraction of the mass of the primordial belt. Computer simulations suggest that the original asteroid belt may have contained mass equivalent to the Earth. Primarily because of gravitational perturbations, most of the material was ejected from the belt within about a million years of formation, leaving behind less than 0.1% of the original mass. Since their formation, the size distribution of the asteroid belt has remained relatively stable: there has been no significant increase or decrease in the typical dimensions of the main-belt asteroids.”
“Contrary to popular imagery, the asteroid belt is mostly empty. The asteroids are spread over such a large volume that it would be improbable to reach an asteroid without aiming carefully. Nonetheless, hundreds of thousands of asteroids are currently known, and the total number ranges in the millions or more, depending on the lower size cutoff. Over 200 asteroids are known to be larger than 100 km, and a survey in the infrared wavelengths has shown that the asteroid belt has 0.7–1.7 million asteroids with a diameter of 1 km or more. [...] The total mass of the asteroid belt is estimated to be 2.8×10²¹ to 3.2×10²¹ kilograms, which is just 4% of the mass of the Moon. [...] Several otherwise unremarkable bodies in the outer belt show cometary activity. Because their orbits cannot be explained through capture of classical comets, it is thought that many of the outer asteroids may be icy, with the ice occasionally exposed to sublimation through small impacts. Main-belt comets may have been a major source of the Earth’s oceans, because the deuterium–hydrogen ratio is too low for classical comets to have been the principal source. [...] Of the 50,000 meteorites found on Earth to date, 99.8 percent are believed to have originated in the asteroid belt.”
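A two-line sanity check of the “4% of the mass of the Moon” figure; note that the lunar mass used here (~7.35×10^22 kg) is my own input, not from the article:

```python
# Belt mass range quoted above, versus an approximate lunar mass.
belt_low, belt_high = 2.8e21, 3.2e21  # kg, from the quote
moon_mass = 7.35e22                   # kg, approximate (my assumption)
frac_low, frac_high = belt_low / moon_mass, belt_high / moon_mass
print(f"belt is {frac_low:.1%} to {frac_high:.1%} of the Moon's mass")
```

Which lands right around the quoted 4%.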
iv. Series (mathematics). This article has a lot of stuff, including lots of links to other stuff.
v. Occupation of Japan. Interesting article; I haven’t read much about this topic before. Some quotes:
“At the head of the Occupation administration was General MacArthur who was technically supposed to defer to an advisory council set up by the Allied powers, but in practice did everything himself. As a result, this period was one of significant American influence [...] MacArthur’s first priority was to set up a food distribution network; following the collapse of the ruling government and the wholesale destruction of most major cities, virtually everyone was starving. Even with these measures, millions of people were still on the brink of starvation for several years after the surrender.”
“By the end of 1945, more than 350,000 U.S. personnel were stationed throughout Japan. By the beginning of 1946, replacement troops began to arrive in the country in large numbers and were assigned to MacArthur’s Eighth Army, headquartered in Tokyo’s Dai-Ichi building. Of the main Japanese islands, Kyūshū was occupied by the 24th Infantry Division, with some responsibility for Shikoku. Honshū was occupied by the First Cavalry Division. Hokkaido was occupied by the 11th Airborne Division.
By June 1950, all these army units had suffered extensive troop reductions and their combat effectiveness was seriously weakened. When North Korea invaded South Korea (see Korean War), elements of the 24th Division were flown into South Korea to try to stem the massive invasion force there, but the green occupation troops, while acquitting themselves well when suddenly thrown into combat almost overnight, suffered heavy casualties and were forced into retreat until other Japan occupation troops could be sent to assist.”
“During the Occupation, GHQ/SCAP mostly abolished many of the financial coalitions known as the Zaibatsu, which had previously monopolized industry. [...] A major land reform was also conducted [...] Between 1947 and 1949, approximately 5,800,000 acres (23,000 km2) of land (approximately 38% of Japan’s cultivated land) were purchased from the landlords under the government’s reform program and resold at extremely low prices (after inflation) to the farmers who worked them. By 1950, three million peasants had acquired land, dismantling a power structure that the landlords had long dominated.”
“There are allegations that during the three months in 1945 when Okinawa was gradually occupied there were rapes committed by U.S. troops. According to some accounts, US troops committed thousands of rapes during the campaign.
Many Japanese civilians in the Japanese mainland feared that the Allied occupation troops were likely to rape Japanese women. The Japanese authorities set up a large system of prostitution facilities (RAA) in order to protect the population. [...] However, there was a resulting large rise in venereal disease among the soldiers, which led MacArthur to close down the prostitution in early 1946. The incidence of rape increased after the closure of the brothels, possibly eight-fold; [...] “According to one calculation the number of rapes and assaults on Japanese women amounted to around 40 daily while the RAA was in operation, and then rose to an average of 330 a day after it was terminated in early 1946.” Michael S. Molasky states that while rape and other violent crime was widespread in naval ports like Yokosuka and Yokohama during the first few weeks of occupation, according to Japanese police reports and journalistic studies, the number of incidents declined shortly after and were not common on mainland Japan throughout the rest of occupation. Two weeks into the occupation, the Occupation administration began censoring all media. This included any mention of rape or other sensitive social issues.”
“Post-war Japan was chaotic. The air raids on Japan’s urban centers left millions displaced, and food shortages, created by bad harvests and the demands of the war, worsened when the seizure of food from Korea, Taiwan, and China ceased. Repatriation of Japanese living in other parts of Asia only aggravated the problems in Japan as these displaced people put more strain on already scarce resources. Over 5.1 million Japanese returned to Japan in the fifteen months following October 1, 1945. Alcohol and drug abuse became major problems. Deep exhaustion, declining morale and despair were so widespread that it was termed the “kyodatsu condition” (虚脱状態 kyodatsujoutai, lit. “state of lethargy”). Inflation was rampant and many people turned to the black market for even the most basic goods. These black markets in turn were often places of turf wars between rival gangs, like the Shibuya incident in 1946.”
I have finished Hargie’s book. I don’t have much to say about it that I haven’t already said, aside perhaps from the fact that the book, partly on account of its high page count, contains a lot of material I feel tempted to cover here. I should note that I’m tempted to cover it not only because it’s interesting, but also to a significant extent because I know this is one of those books I’ll never open again, so whatever I decide not to blog will surely be forgotten. The first two posts dealt with roughly the first 200 pages, so there’s still a lot left to talk about. You can expect at least one or two more posts about the book in the days to come.
In this post I’ve covered material from the chapters on the skill of listening, the skill of explaining (very little, this was a bad chapter), and the skill of self-disclosure.
“One recurring problem is that we often listen with the goal of responding, rather than listening with the goal of understanding [...] our main concern is with our own point of view rather than with gaining a deeper insight into the other person’s perspective.”
“In interpersonal interaction a constant stream of feedback impinges upon us, both from the stimuli received from other people and from the physical environment. Not all of this feedback is consciously perceived, since there is simply too much information for the person to cope with adequately. As a result, a selective perception filter [...] is operative, and its main function is to filter only a limited amount of information into the conscious, while some of the remainder may be stored at a subconscious level. [...] Unfortunately, in interpersonal interaction, vital information can be filtered out, in that we may be insensitive to the social signals emitted by others. Where this occurs, effective listening skills are not displayed. [...] As we talk, at the same time we also scan for feedback to see how our messages are being received. When we listen, we evaluate what is being said, plan our response, rehearse this and then execute it. While the processes of evaluation, planning and rehearsal usually occur subconsciously, they are important because they can interfere with the pure listening activity. Thus, we may have decided what we are going to say before the other person has stopped speaking, and as a result may not be listening effectively. It is therefore important to ensure that those activities that mediate between listening and speaking do not interfere with the listening process itself.”
“We evaluate others based on their appearance, initial statements or what they said during previous encounters. These influence the way the speaker is heard, in that statements may be screened so that only those aspects that fit with specific expectations are perceived.”
“Those with a wider vocabulary are better listeners, since they can more readily understand and assimilate a greater range of concepts. [...] A listener who is highly motivated will remember more of the information presented. [...] Listening ability deteriorates as fatigue increases. Thus, someone who is extremely tired is less capable of displaying prolonged listening. [...] Introverts are usually better listeners than extraverts, since they are content to sit back and let the other person be the centre of attention. Furthermore, highly anxious individuals do not usually make good listeners since they tend to be too worried about factors apart from the speaker to listen carefully to what is being said.”
“The differential between speech and thought rate gives the listener an opportunity to assimilate, organise, retain and covertly respond to the speaker. However, this differential may also encourage the listener to fill up the spare time with other unrelated mental processes (such as daydreaming). Listening can be improved by using this spare thought time positively by, for example, asking covert questions such as:
• ‘What are the main points being made?’
• ‘What reasons are being given?’
• ‘In what frame of reference should this be viewed?’
• ‘What further information is necessary?’
Where a speaker exceeds 300 words per minute, listening can be problematic. It is difficult to listen effectively for an extended period to a very rapid speaker, since we cannot handle the volume of information being received. [...] The clarity, fluency and audibility of the speaker all have an influence on listener comprehension.”
“If the speaker displays high levels of emotion, the listener may be distracted by this and cease to listen accurately to the content of the verbal message. In situations where individuals are in extreme emotional states, their communication is inevitably highly charged. [...] When faced with a person experiencing extreme emotions [...] it is often not advisable either to reinforce positively or to rebuke the individual for this behaviour, since such reactions may well be counterproductive. [...] A more reasoned response is to react in a calm fashion, demonstrating an interest in, without overtly reinforcing, the emotional person, but also showing a willingness to listen and attempt to understand what exactly has caused this to occur. Only when strong emotional feelings begin to decrease can a more rational discussion take place.”
“If the speaker is regarded as an important person, or a recognised authority on a topic, listening comprehension is increased as more credence will be attached to what is being said. Also, more attention tends to be paid if the speaker is in a position of superiority. Attention is therefore greater if the listener has admiration and respect for a speaker of high credibility. [...] when the message conveys similar values, attitudes or viewpoints to our own, listening is facilitated [...] Effective listening is facilitated by paying attention to only one person at a time, and by manipulating the environment in order to ensure that extraneous distractions are minimised [...] If someone is expected to listen for a prolonged period, [...] comfortable seating has been shown to be an important factor for listening effectiveness [...] In group contexts, a compact seating arrangement is more effective than a scattered one. People pay more attention and recall more when they are brought close together physically, as opposed to when they are spread out around the room.”
“Someone who is self-conscious, and concerned with the personal impression being conveyed, is unlikely to be listening closely to others. In terms of research into memory, two identified problems relate to the process of inhibition [...]. Proactive inhibition occurs when something that has already been learned interferes with attempts to learn new material. A parallel problem is retroactive inhibition, which is where material that has already been learned is impaired as a result of the impact of, and interference from, recent material. In interpersonal encounters inhibition also occurs. Retroactive listening inhibition is where the individual is still pondering over the ramifications of something that happened in the recent past, at the expense of listening to the speaker in the present interaction. Proactive listening inhibition takes place when someone has an important engagement looming, and a preoccupation with this militates against listening in the present. The main mental focus then tends to be more about how to handle the future encounter than about what the speaker is currently saying.”
“Research has shown that speakers want listeners to respond appropriately to what they are saying rather than to ‘just listen’ [...]. In other words, they desire active listening in the form of both verbal and nonverbal behaviours. Active listening requires concerted effort and attention [...], as it involves showing that we have both heard and understood what the other person has communicated [...] When the listener shows warmth and enthusiasm for what we are saying, this attitude influences us and so we are likely to become more expressive and expansive. By contrast, when the listener is cold and formal, we are more likely to provide basic, factual responses. Although verbal responses are the main indicators of successful listening, if accompanying nonverbal behaviours are not displayed it is usually assumed that an individual is not paying attention, and ipso facto not listening. Thus, while these nonverbal signs may not be crucial to the assimilation of verbal messages, they are expected by others.”
“[One] aspect of reinforcement that is a potent indicator of effective listening is reference to past statements. This can range from simply remembering someone’s name to remembering other details about facts, feelings or ideas they may have expressed in the past. This shows a willingness to pay attention to what was previously discussed and in turn is likely to encourage the person to participate more fully in the present interaction.”
“[Some] nonverbal cues have been identified as signs of inattentiveness or lack of listening [...] The most common of these are: • inappropriate facial expressions • lack of eye contact • poor use of paralanguage (e.g. flat tone of voice, no emphasis) • slouched or shifting posture • absence of head nods • the use of distracting behaviours (e.g. rubbing the eyes, yawning, writing or reading while the speaker is talking). [...] These nonverbal signals can of course be deceiving, in that someone who is assimilating the verbal message may not appear to be listening. [...] Conversely, people may engage in pretend listening, or pseudo-listening, where they show all of the overt signs of attending but are not actually listening at all [...] both the verbal and nonverbal determinants of active listening play a key role in social interaction. In fact, these signs are integrated in such a fashion that, in most cases, if either channel signals lack of attention this is taken as an overall indication of poor listening. [...] It is important to realise that listening is not something that just happens, but rather is an active process in which the listener decides to pay careful attention to the speaker.”
“An explanation may be triggered by a direct request from someone who needs or wants to know something. Alternatively it may be the explainer who initiates the exchange. In the latter case, an important first step may be to create a ‘felt need’ on the part of recipients. They should have a sense that listening to what is about to be said will be worthwhile [...] An explanation that carries more detail than is necessary is just as defective as one that does not carry enough – a key feature of effective explanations is that they are succinct [...] One of the causes of punctuating speech with sounds such as ‘eh’ or ‘mm’ is trying to put too many ideas or facts across in one sentence. It is better to use reasonably short crisp sentences, with pauses in between, than long rambling ones full of subordinate clauses. This will generally tend to eliminate speech hesitancies. Another cause of dysfluency is lack of adequate planning and forethought. [...] Pausing briefly to collect and organise thought processes before embarking on an explanation can [...] facilitate fluent speech patterns. Added to that, planned pausing can help to increase understanding of the explanation.”
“When two people meet for the first time, it is more likely that they will focus upon factual disclosures (name, occupation, place of residence) while keeping any feeling disclosures at a fairly superficial level (‘I hate crowded parties’, ‘I like rock music’). This is largely because the expression of personal feelings involves greater risk and places the discloser in a more vulnerable position. [...] A gradual progression from low to high levels of self-disclosure leads to better relationship development. [...] Factual and feeling disclosures at a deeper level can be regarded as a sign of commitment to a relationship.”
“A self-disclosure can be about one’s own personal experience, or it can be about one’s personal reaction to the experiences being related by another [...] If the objective is to give concerted attention to an individual and encourage full disclosure, then concentrating upon one’s reactions to the feelings or thoughts of the other person would be most appropriate. If, however, the intention is to demonstrate that the person’s feelings are not unusual, then the use of a parallel self-disclosure relating one’s own experience would be more apposite.”
“[Valence] is the degree to which the disclosure is positive or negative for both discloser and listener. In the early stages of relationship development, disclosures are mainly positive, and negative self-disclosures usually only emerge once a relationship has been established. [...] Negative self-disclosures have been shown to be marked by paralinguistic cues such as stuttering, stammering, repetition, mumbling and low ‘feeble’ voice quality, whereas positive disclosures tend to be characterised by rapid, flowing, melodious speech [...] we expect others to make positive self-disclosures and so we become more alert upon receiving a negative disclosure. [...] negative information is attributed as possessing greater relevance than positive information (Yoo, 2009). Thus, the comparative rarity of negative disclosures, and their greater inferential power, mean that we need to use them with caution. [...] the use of negative disclosures tends to lead to more negative evaluations of the discloser. [...] While the continuous and indiscriminant use of negative self-disclosure is dysfunctional, the judicious application of such disclosures can actually facilitate relational development. [...] negative emotions should be expressed to those with whom one has a relationship, the depth of disclosed emotional state should be concomitant with the level of friendship, and the intensity of the disclosure should reflect the degree of emotional need. Given these parameters, Graham et al. showed that the disclosure of appropriate negative emotions increased ratings of likability, elicited offers of help and increased the level of relational intimacy.”
“Where there is a high degree of asymmetry in status, disclosure tends to be in one direction [...] workers may disclose personal problems to their supervisors, but the reverse does not usually happen. This is because for a supervisor to disclose personal information to a subordinate would cause a loss of face, which would affect the status relationship. Research findings tend to suggest that self-disclosures are most often employed between people of equal status [...] There would seem to be a relationship between psychological adjustment and self-disclosure in that individuals who are extremely high or low disclosers are regarded as less socially skilled.”
“Often, before we make a deep disclosure, there is a strategic process of testing [...] or advance pre-testing [...], whereby we ‘trail’ the topic with potential confidants and observe their reactions. If these are favourable, then we continue with the revelations; if not, we move on to a new topic. However, the initial dangers of self-disclosure are such that we expect an equal commitment to this process from people with whom we may wish to develop a relationship [...]. For this reason, reciprocation is expected in the early stages of everyday interaction. [...] a person’s disclosure increases the likelihood that the other party will also disclose. In everyday interaction, reciprocation of self-disclosures is the norm. [...] It is interesting to note that when people are not able to utilise interpersonal channels for disclosure, they often use substitutes such as keeping a personal diary, talking to a pet or conversing with God. Indeed, this need can be observed at an early stage in young children who often disclose to a teddy bear or doll.”
“People who do not have access to a good listener may not only be denied the opportunity to heighten their self-awareness, but they are also denied valuable feedback as to the validity and acceptability of their inner thoughts and feelings. By discussing these with others, we receive feedback as to whether these are experiences which others have as well, or whether they are less common. Furthermore, by gauging the reactions to our self-disclosures we learn what types are acceptable or unacceptable with particular people and in specific situations. [...] The appropriate use of self-disclosure is crucial to the development and maintenance of long-term relationships [...]. Those who disclose either too much or too little tend to have problems in establishing and sustaining relationships.”
“In their meta-analysis of research studies into gender differences in adults’ language use, Leaper and Ayres (2007) found that women used more self-disclosures than men. Dindia (2000b), in an earlier meta-analytical study, also found that females disclosed more than males, but this was moderated by the gender of the recipient, so that:
• females do not disclose to males any more than males do to males
• females disclose more to females than males do to males
• females disclose more to females than males do to females
• females disclose more to males than males do to females.”
“Personality variables have been shown to relate to disclosure level [...]. Shy, introverted types, those with low self-esteem and individuals with a high need for social approval disclose less, and social desirability is negatively related to depth of disclosure. Also those with an external locus of control [...] disclose less than those with an internal locus of control [...]. Lonely individuals have also been found to disclose less [...] Accepting/empathic people receive more disclosures. [...] ‘We like people who self-disclose to us, we disclose more to people we like, and we like others as a result of having disclosed to them’ [...] more self-disclosures tend to be made to individuals who are perceived as being similar (in attitudes, values, beliefs, etc.)”
“Solano and Dunnam (1985) showed that self-disclosure was greater in dyads than in triads, which in turn was greater than in a four-person group. They further found that this reduction applied regardless of the gender of the interactors and concluded that there may well be a linear decrease in self-disclosure as group size increases.”
I’ve been reading Hargie. I’ve now read roughly two-thirds of the book; it’s quite long (the pdf version has 629 pages). In the first part of this post I’ve added some general comments and observations I’ve made along the way, and in the second part I’ve covered some specific topics from the book.
First off, the book has made it easier for me to understand the hostility some ‘hard scientists’ display towards what they like to call the social ‘sciences’. Some of the stuff in this book is horrible. In a way it has been nice to spend a bit of time with the sort of research that really gives social science a bad name; I’ve encountered some of that research before while doing work on psychology, but some of the stuff included here really takes the cake. Not all of it is useless or terrible, but there’s no question that some of it is, and so while you’re reading about results from the book – I don’t suggest you actually read the book – you need to take some of them with a grain of salt or two. Or five. Even though I’ve tried to shy away from the worst of it.
An observation I was reminded of along the way is that while reporting the results of studies on a topic may in some contexts signal to others that you’re a Man Of Science who Really Knows What He’s Talking About, in other contexts it does nothing of the sort, and in some it may even completely undermine whatever trust people had in your ability to make judgments on such matters. (On reflection, it seems almost certain that some of my past posts on this blog have had this effect on readers, in part because I don’t always make clear when I’m skeptical about specific findings; explicitly expressing skepticism takes a lot more work than the alternative – oh, well…) Some of Hargie’s coverage basically only served to convince me that I couldn’t trust his views on the science, because of the way he talked about the results. Here’s an example:
“As discussed in Chapter 3, in reality deceit is not always so easy to detect. In fact, research has consistently shown that people are on average only 47 per cent accurate in detecting deception – that is less than chance.” (p. 250)
“Research has consistently shown that people are on average only 47 per cent accurate”. Yep, he actually wrote that. Research has consistently produced the number 47 per cent, according to Hargie. I’m not convinced at this point that the author even knows what a standard deviation is. On a related note, in my first post about the book I noted that Hargie had some obvious holes in his knowledge, because it was clear to me that he hadn’t read Aureli et al. (nor the related literature on primatology/ethology). In later chapters it becomes equally obvious that he has never read Funder’s book, as some of the details there are simply missing from his coverage – important details, I might add, which I sort of hope Funder isn’t the only one to have talked about. In the same vein, some of the book’s coverage of the attractiveness variable would likely also have been better had the author read Bobbi Low, as a few of the reported results seem questionable given the coverage there.
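To make the standard-deviation point concrete, here’s a small toy simulation I put together (all the numbers – study count, sample size per study – are hypothetical): even if the true population accuracy really were exactly 47 per cent, individual studies would still report a spread of values because of sampling error alone. A meta-analytic mean of roughly 47 per cent is the most charitable reading of Hargie’s sentence; no literature “consistently” produces one number.

```python
import random
import statistics

random.seed(42)  # fixed seed so the toy run is reproducible

TRUE_ACCURACY = 0.47    # the point estimate Hargie reports
N_STUDIES = 50          # hypothetical number of studies
JUDGMENTS_PER_STUDY = 100

def run_study(p, n):
    """Simulate one study: n binary lie/truth judgments, each correct with probability p."""
    correct = sum(1 for _ in range(n) if random.random() < p)
    return correct / n

observed = [run_study(TRUE_ACCURACY, JUDGMENTS_PER_STUDY) for _ in range(N_STUDIES)]

print(f"mean observed accuracy: {statistics.mean(observed):.3f}")
print(f"std. dev. across studies: {statistics.stdev(observed):.3f}")
print(f"range: {min(observed):.2f} – {max(observed):.2f}")
```

The individual “studies” here scatter several percentage points around the true value, which is exactly why a summary without any measure of spread tells you very little.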
I figured beforehand that I should read a book like this because it covers stuff which may well be useful for someone like me to know. An interesting observation I’ve made is that it turns out I might actually be quite a lot better at some of the aspects relating to social interaction and communication than I’d imagined (though note that this observation does not really relate to anything covered in this post), even if some other aspects covered in this book do without doubt cause me serious trouble. I can justify reading on because good books on topics such as these are really hard to come by – most books dealing with topics like these are presumably self-help books, and I’m not reading those – and because some of the stuff really is not all that terrible, perhaps even quite useful. There’s enough semi-useful stuff mixed in with the crap to justify reading on for now, though I’ve been close to just saying ‘enough’ more than once. I should note that, the way I see it, by covering the book here I’m doing people reading along a major favour, as I intend to deal only with the reasonably useful stuff in this post and in the posts to come, meaning that you don’t need to go through all the crap as well. I figure that if I were to cover the bad stuff in detail, all I’d be doing would be venting and fueling my anger at the author, and I see no reason to go down that road, as it would likely be counterproductive in terms of actually learning stuff from the book. I think I’ll probably give the book 1 star on goodreads once I’ve finished it, because I dislike the way it is written, but I do plan to finish it. I’d like to learn more about social skills, including communication skills, in order to get better at figuring out how to improve them, and the way I usually go about learning about an area I don’t know a great deal about is to read a book on the topic.
Well, this is the sort of book which is available here, so that’s what I’ve got to work with – and I feel like I have to try to make it work.
But it’s occasionally really hard to keep reading. Perhaps the best way to illustrate what kind of book this is is to include this observation: the author mentions Star Trek in this textbook. He uses it as a lever to talk about cultural developments in the 1960s. Frankly I think students who are forced to use this textbook in class should demand that their universities fire the incompetent clowns who decided it would be a good idea to assign it (before switching course/major – seriously, if this book is a good example of the kind of teaching materials on offer in this field, those students should be running for the hills).
With all that stuff out of the way, I have included some of the reasons why I kept reading anyway below. First I’ll cover a few details from the chapter on nonverbal communication which I did not get to talk about in my first post about the book, and then I’ll move on to cover some stuff from the chapter on reinforcement and the chapter about how to go about asking questions.
“Interactors of equal status tend to take up a closer distance than those of unequal status (Zahn, 1991). In fact, where a status differential exists, lower-status individuals will typically permit those of higher status to approach more closely than they would feel privileged to do. As the topic of conversation shifts to become more intimate than is comfortable for the other, that person may increase distance. Interpersonal distance is, therefore, part of [the] dynamic of nonverbal cues, including gaze and orientation, serving to regulate levels of intimacy and involvement.”
“Judge and Cable (2004) found that in the US workplace those who were six feet tall could expect to earn $166,000 more over a 30-year career span than those who were seven inches shorter. Similarly, Case and Paxson (2008) showed that in both the US and UK for every additional 10 centimetre (4 inches) height advantage, males earned between 4 to 10 per cent more, and females between 5 and 8 per cent more. [...] in a large Australian study [...] Kortt and Leigh (2010) [found that] a 10-centimetre increase in height was linked to a 3 per cent increase in pay for men, and a 2 per cent increase for women.”
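A bit of arithmetic on the figures quoted above, to get a feel for the magnitudes involved (the $50,000 base salary is my own hypothetical number, not from any of the studies):

```python
# Illustrative arithmetic for the height premiums quoted above.
base_salary = 50_000   # hypothetical annual salary in USD, for illustration only

# Case and Paxson (2008): males earn 4–10 per cent more per additional 10 cm.
low, high = 0.04, 0.10
print(f"Per 10 cm, a ${base_salary:,} earner would gain roughly "
      f"${base_salary * low:,.0f}–${base_salary * high:,.0f} per year.")

# Judge and Cable (2004): $166,000 more over a 30-year career span,
# which works out to a per-year difference of:
per_year = 166_000 / 30
print(f"Judge and Cable's figure amounts to about ${per_year:,.0f} per year.")
```

Worth noting that the Judge and Cable figure, spread over a career, lands in the same rough ballpark as the lower end of the Case and Paxson percentage range for a middling salary.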
“Mean amplitude (associated closely with loudness) and the extent to which amplitude varies around a mean value have been shown to be positively related to perceived dominance of the speaker (Tusing and Dillard, 2000). Speech rate, on the other hand, was negatively associated (i.e. the faster the rate, the lower the estimation of dominance). Anxiety is an emotional state likely to produce speech errors (Knapp and Hall, 2010).”
“In broad terms [...] NVC [Non-Verbal Communication] compared to language tends to rely less on a symbolic code, is often represented in continuous behaviour, carries meaning less explicitly and typically conveys emotional/relational rather than cognitive/propositional information. [...] By means of NVC we can replace, complement, modify or contradict the spoken word. When it is suspected that the latter was done unintentionally and deceit is possible, nonverbal cues are often regarded as more truthful. We also regulate conversations through gestures, gaze and vocal inflection. Revealing emotions and interpersonal attitudes, negotiating relationships, signalling personal and social identity and contextualising interaction are further uses served by means of haptics, proxemics, kinesics and vocalics, together with physical characteristics of the person and the environment. We need information about other people’s qualities, attributes, attitudes and values in order to know how to deal with them. We often infer personality, attitudes, emotions and social status from the behavioural cues presented to us. [...] much of nonverbal meaning is inferred and can be easily misconstrued. It only suggests possibilities and must be interpreted in the overall context of not only verbal but also personal and circumstantial information.” [This is part of what I find really annoying about this kind of stuff - there is no list you can look up where a specific type of nonverbal behaviour only has one interpretation. People may yawn because they're tired, because they are bored, or perhaps simply because someone else yawned. For some people it's hard to tell the difference, and if your interpretation is wrong it may have unfortunate effects. I recently visited some acquaintances and grossly misinterpreted the body language of one of these acquaintances.
Because I misinterpreted the body language of the acquaintance, I basically found myself getting mentally ready to leave at one point because I figured I was obviously boring them and I was assuming they were signalling this to me while thinking about how best to go about getting rid of me (anxiety and elevated rejection-sensitivity on my part probably played a role as well, but that's part of the point - lots of things play a role here). It turned out that I had completely misinterpreted the nonverbal communication and that they were having a good time - instead of being asked to leave at the point where I was considering suggesting that I leave to allow them to save face by not making them have to spell it out explicitly that I was no longer welcome, which would be awkward, I ended up staying for another two hours and received verbal feedback to the effect that they had found the interaction to be enjoyable and that they would like to repeat it in the future.]
“reinforcement is based ‘on the simple principle that whenever something reinforces a particular activity of an organism, it increases the chances that the organism will repeat that behavior’. [...] a reinforcer, by definition, has the effect of increasing the probability of the preceding behaviour. [...] reinforcement can be engineered through positive or negative means. The positive reinforcement principle states that ‘if, in a given situation, somebody does something that is followed immediately by a positive reinforcer, then that person is more likely to do the same thing again when he or she next encounters a similar situation’ [...] a reward is something given and received in return for something done. While it may act as a reinforcer, whether or not it actually does is an empirical question. [...] [In the context of negative reinforcement,] an act is associated with the avoidance, termination or reduction of an aversive stimulus that would have either occurred or continued at some level had the response not taken place. [...] Stated formally, the Premack principle proposes that activities of low probability can be increased in likelihood if activities of high probability are made contingent upon them. [...] the influence of rewards is wide ranging and can be indirect. Vicarious reinforcement is the process whereby individuals are more likely to adopt particular behaviours if they see others being rewarded for engaging in them.”
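The core principle quoted above – a reinforcer increases the probability of the preceding behaviour – can be made concrete with a toy simulation. This is purely an illustration of the definition, not a model from the book; the initial probability and learning rate are arbitrary numbers I picked:

```python
import random

random.seed(0)  # fixed seed so the toy run is reproducible

# Toy illustration of positive reinforcement: whenever the target behaviour
# occurs it is immediately reinforced, which nudges up the probability that
# it occurs again in a similar situation.
p = 0.2              # initial probability of emitting the target behaviour
learning_rate = 0.1  # how strongly each reinforcement shifts the probability

history = [p]
for trial in range(50):
    emitted = random.random() < p
    if emitted:
        # the behaviour occurred and was reinforced: increase its probability
        p += learning_rate * (1.0 - p)
    history.append(p)

print(f"initial probability: {history[0]:.2f}, after 50 trials: {p:.2f}")
```

Note the empirical-question caveat from the quote: in this sketch reinforcement works by construction, whereas for a real reward you would have to check whether the behaviour’s frequency actually went up.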
“During social encounters we not only welcome but demand a certain basic level of reward. If it is not forthcoming we may treat this as sufficient grounds for abandoning the relationship in favour of more attractive alternatives. [...] rewards can be administered in a planned and systematic fashion to selectively reinforce and shape contributions along particular lines. [...] there is little which is either original or profound in the proposition that people are inclined to do things that lead to positive outcomes and avoid other courses of action that produce unwanted consequences. This much is widely known. Indeed, the statement may seem so obvious as to be trivial. But [...] despite this general awareness, individuals are often remarkably unsuccessful in bringing about behavioural change in both themselves and other people. [...] In addition to influencing what recipients say or do, bestowing rewards also conveys information about the giver. Providers of substantial amounts of social reinforcement are usually perceived to be keenly interested in those with whom they interact and what they have to say. They also typically create an impression of being warm, accepting and understanding. [...] By contrast, those who dispense few social rewards are often regarded as cold, aloof, depressed or bored – as well as boring. [...] Positive reactions may not only produce more favourable impressions towards those who offer them, but can also result in heightened feelings of self-esteem and self-efficacy in the recipient.”
“People are not merely passive recipients of the reactions of others. Rather, they often make a deliberate effort to present themselves in such a way as to attract a particular type of evaluative response. One motive for this is self-enhancement. Through a process of impression management or self-presentation individuals go out of their way to make themselves as appealing as possible to others [...] For some, under certain circumstances, self-verification rather than self-enhancement is what counts [...]. Here, it is not necessarily a positive evaluation that is being sought, but rather one that is consistent with the individual’s existing self-referenced views and beliefs [...] These findings have interesting and significant ramifications for rewarding and reinforcing. For those with a poor self-concept and low self-esteem, praise and other positive reactions incongruent with how they regard themselves may not be appreciated and fail to have a reinforcing influence. Indeed, the opposite may be the case. [...] The success of praise as a reinforcer can be increased [...] by ensuring that it: • is applied contingently • specifies clearly the particular behaviour being reinforced • is offered soon after the targeted behaviour • is credible to the recipient • is restricted to those [...] who respond best to it [...] Praise does not always carry positive messages [...] A reward for doing very little provides no useful feedback and does not increase the individual’s sense of competence. [...] When seen as an attempt at cynical manipulation, rewards are also likely to be counterproductive [...] It is important that social rewards are perceived as genuinely reflecting the source’s reaction to the targeted person or performance.”
“The administration of reinforcement is not solely dependent upon the verbal channel of communication. It has been established that a number of nonverbal behaviours, such as a warm smile or an enthusiastic nod of the head, can also have a reinforcing impact on the behaviour of the other person during interaction. [...] The nonverbal channel is particularly adept at communicating states and attitudes such as friendliness, interest, warmth and involvement. [...] The establishment of eye contact is usually a preparatory step when initiating interaction. During a conversation, continued use of this behaviour is an indicator of our responsiveness to the other, and level of involvement in the exchange. Its selective use can, therefore, have reinforcing potential. [...] Proximity reinforcement refers to potential reinforcing effects that can accrue from altering the distance between oneself and another during interaction. A reduction in interpersonal distance usually accompanies a desire for greater intimacy and involvement. However, while someone who adopts a position at some distance from the other participant may be seen as being unreceptive and detached, a person who approaches too closely may be regarded as overfamiliar, dominant or even threatening.”
“It is not necessary to reinforce constantly each and every instance of a specific response for that class of response to be increased. It has been found that, following an initial period of continual reinforcement to establish the behaviour, the frequency of reinforcement can be reduced without resulting in a corresponding reduction in target behaviour. This is called intermittent reinforcement, and many real-life activities [...] are maintained in this way. [...] Accordingly, Maag (2003) recommended that rewards should be used sparingly to maximise their reinforcing efficacy. A related recommendation is that recipients have access to these only after performing the desired behaviour. Along similar lines, gain/loss theory predicts that when the receipt of a reward is set against a backdrop of a general paucity of positive reaction from that source, its effect will be enhanced [...] The continual and inflexible use of a specific reinforcer will quickly lead to that reinforcer losing its reinforcing properties. The recipient will become satiated. [...] An attempt should therefore be made to employ a variety of reinforcing expressions and behaviours [...] If reinforcement is delayed, there is a danger that other responses may intervene between the one to be promoted and the presentation of the reinforcer. Making the individual aware of the basis upon which the reinforcer, when it is delivered, is gained may help to reduce the negative effects of delay. [...] from a motivational viewpoint, the availability of immediate payoff is likely to have greater incentive value than the prospect of having to wait for some time for personal benefits to materialise. [...] selective reinforcement refers to the fact that it is possible to reinforce selectively certain elements of a response without necessarily reinforcing it in total. [...] Allied to this process, shaping permits nascent attempts at an ultimately acceptable end performance to be rewarded. 
By systematically demanding higher standards for rewards to be granted, performances can be shaped to attain requisite levels of excellence. The acquisition of most everyday skills [...] involve an element of shaping.”
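The shaping idea at the end of the quote – systematically demanding higher standards for rewards to be granted – can likewise be sketched as a toy simulation. The learner model and every parameter value here are invented for illustration; the only point is that a criterion which starts low and rises whenever it is met drags performance towards the target:

```python
import random

random.seed(1)  # fixed seed for reproducibility

performance = 10.0   # current skill level, in arbitrary units
criterion = 12.0     # initial, easily reachable standard for reward
target = 100.0       # the ultimately acceptable end performance

for step in range(500):
    # each attempt scatters randomly around the current skill level
    attempt = performance + random.uniform(-5.0, 5.0)
    if attempt >= criterion:
        performance = max(performance, attempt)     # reinforced gains stick
        criterion = min(target, performance + 2.0)  # demand a bit more next time

print(f"performance after 500 attempts: {performance:.1f}")
```

Had the criterion been fixed at the target from the start, the learner would almost never be rewarded at all – which is the whole rationale for rewarding nascent attempts first.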
“The most common division of questions relates to the degree of freedom, or scope, given to the respondent in answering. Those that leave the respondent open to choose any one of a number of ways in which to reply are referred to as open questions, while those that require a short response of a specific nature are termed closed questions. [...] Closed questions are usually easy to answer, and so are useful in encouraging early participation in an interaction. [...] Closed questions can usually be answered adequately in one or a very few words. They are restricted in nature, imposing limitations on the possible responses that the respondent can make. They give the questioner a high degree of control over the interaction [...] Open questions are broad in nature and require more than one or two words for an adequate answer. In general they have the effect of ‘encouraging clients to talk longer and more deeply about their concerns’ [...]. They are useful in allowing a respondent to express opinions, attitudes, thoughts and feelings. They do not require any prior knowledge on the part of questioners, who can ask open questions about topics or events with which they are not familiar. They also encourage the respondent to talk, thereby leaving the questioner free to listen and observe. This means, of course, that the respondent has a greater degree of control over the interaction and can determine to a greater extent what is to be discussed [...] An important advantage of open questions is that the respondent may reveal information that the questioner had not anticipated. [...] Answers to open questions may be time consuming and may also contain irrelevant or less vital information. [...] There is research evidence to suggest that a consistent sequence of questions facilitates participation and understanding in respondents [...] On the other hand, an erratic sequence of open and closed questions is likely to confuse the respondent and reduce the level of participation. 
Erratic sequences of questions [...] are common in interrogative interviews where the purpose is to confuse suspects [...] Generalisations about the relative efficacy of open or closed questions are difficult, since the intellectual capacity of the respondent must be taken into consideration. It has long been known that open questions may not be as appropriate with respondents of lower intellect.”
“questions allow the questioner to control the conversation ‘by requesting the addressee to engage with a specific topic and/or perform a particular responsive action’. [...] in most contexts it is the person of higher status, or the person in control, who asks the questions. Thus, the majority of questions are asked by teachers in classrooms, doctors in surgeries, nurses on the ward, lawyers in court, detectives in interrogation rooms and so on.”
“The blatantly incorrect simple leading question serves to place the respondent in the position of expert vis-à-vis the misinformed interviewer. As a result, the respondent may feel obliged to provide information that will enlighten the interviewer. Some of this information may involve the introduction of new and insightful material. While they can be effective in encouraging participation, it is not possible to state how and in what contexts simple leading questions can be most gainfully employed. In certain situations, and with particular types of respondent, their use is counterproductive. [...] It has been known for some time that the use of simple leads that are obviously incorrect can induce respondents to participate fully in an interview, in order to correct any misconceptions inherent in the question. [...] [However] [m]ost authors of texts on interviewing have eschewed this form of questioning as bad practice. [...] Research in interrogation consistently reveals that to be successful the interviewer must build up a rapport with the interviewee and appear to be nonjudgemental [...]. Good interrogators possess qualities such as genuineness, trustworthiness, concern, courtesy, tact, empathy, compassion, respect, friendliness, gentleness, receptivity, warmth and understanding. We disclose to such people – they seem to care and do not judge”
Is there any knowledge in the world which is so certain that no reasonable man could doubt it?
This text is included in Pojman’s book; I read it (Russell’s contribution to the book, that is, not the book itself – that book has a lot of stuff…) a while back, but I haven’t really talked about it here on the blog. You can read it here.
I have included a few quotes from the text below. I also added a few personal remarks at the bottom of the post as well.
“In one sense it must be admitted that we can never prove the existence of things other than ourselves and our experiences. No logical absurdity results from the hypothesis that the world consists of myself and my thoughts and feelings and sensations, and that everything else is mere fancy. [...] There is no logical impossibility in the supposition that the whole of life is a dream, in which we ourselves create all the objects that come before us. But although this is not logically impossible, there is no reason whatever to assume that it is true”
“Of course it is not by argument that we originally come by our belief in an independent external world. We find this belief ready in ourselves as soon as we begin to reflect: it is what may be called an instinctive belief. [...] All knowledge, we find, must be built up upon our instinctive beliefs, and if these are rejected, nothing is left. But among our instinctive beliefs some are much stronger than others, while many have, by habit and association, become entangled with other beliefs, not really instinctive, but falsely supposed to be part of what is believed instinctively.
Philosophy should show us the hierarchy of our instinctive beliefs, beginning with those we hold most strongly, and presenting each as much isolated and as free from irrelevant additions as possible. It should take care to show that, in the form in which they are finally set forth, our instinctive beliefs do not clash, but form a harmonious system. There can never be any reason for rejecting one instinctive belief except that it clashes with others; thus, if they are found to harmonize, the whole system becomes worthy of acceptance.
It is of course possible that all or any of our beliefs may be mistaken, and therefore all ought to be held with at least some slight element of doubt. But we cannot have reason to reject a belief except on the ground of some other belief. Hence, by organizing our instinctive beliefs and their consequences, by considering which among them is most possible, if necessary, to modify or abandon, we can arrive, on the basis of accepting as our sole data what we instinctively believe, at an orderly systematic organization of our knowledge, in which, though the possibility of error remains, its likelihood is diminished by the interrelation of the parts and by the critical scrutiny which has preceded acquiescence.
This function, at least, philosophy can perform. Most philosophers, rightly or wrongly, believe that philosophy can do much more than this”
“Nothing can be known to exist except by the help of experience. [...] Rationalists believed that, from general consideration as to what must be, they could deduce the existence of this or that in the actual world. In this belief they seem to have been mistaken. All the knowledge that we can acquire a priori concerning existence seems to be hypothetical: it tells us that if one thing exists, another must exist, or, more generally, that if one proposition is true, another must be true. [...] the scope and power of a priori principles is strictly limited. All knowledge that something exists must be in part dependent on experience. When anything is known immediately, its existence is known by experience alone; when anything is proved to exist, without being known immediately, both experience and a priori principles must be required in the proof. Knowledge is called empirical when it rests wholly or partly upon experience. Thus all knowledge which asserts existence is empirical, and the only a priori knowledge concerning existence is hypothetical, giving connexions among things that exist or may exist, but not giving actual existence.”
“We may believe what is false as well as what is true. We know that on very many subjects different people hold different and incompatible opinions: hence some beliefs must be erroneous. Since erroneous beliefs are often held just as strongly as true beliefs, it becomes a difficult question how they are to be distinguished from true beliefs. [...some talk about the correspondence theory of truth] [...] minds do not create truth or falsehood. They create beliefs, but when once the beliefs are created, the mind cannot make them true or false, except in the special case where they concern future things which are within the power of the person believing, such as catching trains. What makes a belief true is a fact, and this fact does not (except in exceptional cases) in any way involve the mind of the person who has the belief.”
“to a great extent, the uncertainty of philosophy is more apparent than real: those questions which are already capable of definite answers are placed in the sciences, while those only to which, at present, no definite answer can be given, remain to form the residue which is called philosophy. This is, however, only a part of the truth concerning the uncertainty of philosophy. [...] The value of philosophy is [...] to be sought largely in its very uncertainty. The man who has no tincture of philosophy goes through life imprisoned in the prejudices derived from common sense, from the habitual beliefs of his age or his nation, and from convictions which have grown up in his mind without the co-operation or consent of his deliberate reason. To such a man the world tends to become definite, finite, obvious; common objects rouse no questions, and unfamiliar possibilities are contemptuously rejected. As soon as we begin to philosophize, on the contrary, we find, as we saw in our opening chapters, that even the most everyday things lead to problems to which only very incomplete answers can be given. Philosophy, though unable to tell us with certainty what is the true answer to the doubts which it raises, is able to suggest many possibilities which enlarge our thoughts and free them from the tyranny of custom. Thus, while diminishing our feeling of certainty as to what things are, it greatly increases our knowledge as to what they may be; it removes the somewhat arrogant dogmatism of those who have never travelled into the region of liberating doubt, and it keeps alive our sense of wonder by showing familiar things in an unfamiliar aspect. [...] 
Philosophy is to be studied, not for the sake of any definite answers to its questions, since no definite answers can, as a rule, be known to be true, but rather for the sake of the questions themselves; because these questions enlarge our conception of what is possible, enrich our intellectual imagination and diminish the dogmatic assurance which closes the mind against speculation”
I know I’m repeating myself because I’ve said similar things in the past, but I still consider it an important point to add: Many people over time have wasted their lives pondering questions which they would not have been asking themselves if only they had known more stuff about the world. In my model of the world, people need to rely on knowledge to ask good questions, and the more one knows about the world, the better one gets at asking the right questions about it. Thinking about stuff is different from knowing stuff, and the payoff schedule related to ‘knowing more stuff’ for most people will look very different from the payoff schedule related to ‘thinking more about stuff’. People in general have much less knowledge than they ought to have in order to support the opinions they already hold.
On a related matter, if you’re repeatedly engaging yourself in the activity of asking questions to which no answers exist, you’re in my mind – I know some philosophers will disagree – quite likely to be asking the wrong questions and to be wasting your time.
After I’d read the book I googled the author and I came across this lecture, which is actually a really nice lecture about many of the ideas also included in the book:
The stuff covered during the last five minutes or so of the talk is not in the book – there’s no political theory or similar in there – but most of the other stuff is. The book is somewhat more theoretical than the lecture; there’s nothing about vampire bats in there. It probably also goes without saying that the book provides a lot more detail than the lecture, which only really scratches the surface; the analytical level is quite a bit higher in the book.
The book is in my opinion an example of really good philosophy of science. I liked the book a lot; it’s really nicely written, and the author seems to be a very precise and careful writer and thinker. There are pretty much no superfluous pages in the book, which also means that I’ve actually been a bit conflicted about how to blog it, because it seemed impossible to go over all those ideas in just a blog post or two. I suggest you watch the lecture; if you like the lecture and/or want to know more about the ideas presented there, you’ll want to read this book.
The book includes some equations here and there, but nothing you shouldn’t be able to handle. Some really important ideas in the book are not mentioned in the lecture, but this is natural given the format – there’s only so much stuff you can pack into one lecture. For example, in any two-level setting involving ‘particles’ and ‘collectives’, the question arises of how even to define collective (/‘group’) fitness. One might define it as “the average or total fitness of its constituent particles; so the fittest collective is the one that contributes most offspring particles to future generations of particles.” Or one might define it as “the number of offspring collectives it leaves; so the fittest collective is the one that contributes the most offspring collectives to future generations of collectives.” The distinction between these two conceptualizations of collective fitness is actually really important in some analytical contexts, and it is definitely a distinction worth keeping in mind.
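To make the distinction concrete, here’s a toy two-level example I put together (all the numbers are made up, and this is my own illustration, not Okasha’s): collective A’s members each leave many offspring particles, while collective B leaves more offspring collectives, so the two definitions rank the collectives in opposite order.

```python
# Each collective is described by the particle offspring counts of its
# members and by the number of offspring collectives it leaves.
collectives = {
    "A": {"particle_offspring": [3, 3, 3],   # each member leaves 3 particle offspring
          "offspring_collectives": 1},
    "B": {"particle_offspring": [1, 2, 1],
          "offspring_collectives": 2},
}

def fitness1(c):
    """Definition 1: average particle fitness of the collective's members."""
    xs = c["particle_offspring"]
    return sum(xs) / len(xs)

def fitness2(c):
    """Definition 2: number of offspring collectives the collective leaves."""
    return c["offspring_collectives"]

for name, c in collectives.items():
    print(f"{name}: fitness1 = {fitness1(c):.2f}, fitness2 = {fitness2(c)}")
```

Under definition 1 collective A is the fitter (3.0 vs. about 1.33); under definition 2 collective B is (2 offspring collectives vs. 1) – which is precisely why it matters which definition an analysis is implicitly using.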
I may cover the book in more detail later, but for now I’ll limit coverage to the comments above and to the lecture. In my opinion it’s a really nice book; I gave it five stars on goodreads.
Some random observations and some links:
i. I’ve written about diabetic hypoglycemia before – I even blogged a book on the topic just a few weeks ago. So I’ll keep this short. Here’s the key observation from the post to which I link: “Hypoglycemia causes functional brain failure that is corrected in the vast majority of instances after the plasma glucose concentration is raised”.
Functional brain failure is pretty much what it sounds like – the brain stops working. The point I want to make here is that hypoglycemia can strike pretty much at any point in time, including when I’m doing stuff like blogging or commenting. I sometimes develop hypoglycemia while deeply engrossed in some intellectual activity, like reading, writing or chess, in part because in those situations I have a tendency to forget to listen to my body’s signals – perhaps I forget to eat because this stuff is really much more interesting than food, perhaps I don’t really care that I should probably take a blood test now because I’d really much rather just finish this book chapter/chess game/blogpost/whatever. That happens. When it happens while I’m blogging, what comes out the other end may look funny. I occasionally write stuff that’s incoherent and stupid. Sometimes the explanation is simple: I’m an idiot. Sometimes other things play a role as well.
This is a variable you cannot observe, but which I have a lot of information about. It’s a variable I’d like readers of this blog to at least be aware of.
ii. Maxwell wrote this post, which you should consider reading. I won’t pretend to have good reasons/justifications for disliking people I conceive of as arrogant, but I do want to note that I do this and always have. Arrogance is a trait I dislike immensely.
iii. Over the last few days I’ve been reading Okasha’s great book Evolution and the Levels of Selection (I’ve almost finished it and I expect to blog it tomorrow) – so of course when Zach Weiner came up with this joke yesterday, I laughed. Loudly:
(Click to view full size. The comic of course has almost nothing to do with the content of the book, but I’ll take any excuse I can get for blogging that comic…)
iv. The Feynman Lectures on Physics. Available to you, online, free of charge. Stuff like this sometimes makes me think we live in a very nice world at this point.
But then I read posts/watch videos like this one and I’m reminded that things are complicated.
v. A few Khan Academy lectures:
i. “I intend no Monopoly, but a Community in Learning; I study not for my own sake only, but for theirs that study not for themselves.” (Thomas Browne)
ii. “No man can justly censure or condemn another, because indeed no man truly knows another.” (-ll-)
iii. “A cynic is what an idealist calls a realist.” (Humphrey Appleby)
iv. “Words are but the shadows of actions.” (Democritus, as quoted by Plutarch)
v. “To esteem everything is to esteem nothing.” (Molière)
vi. “Death is the inventor of God.” (José Saramago)
vii. “There are people. There are stories. The people think they shape the stories, but the reverse is often closer to the truth.” (Alan Moore)
viii. “Power concedes nothing without a demand. It never did and it never will. Find out just what any people will submit to, and you have found out the exact amount of injustice and wrong which will be imposed upon them; and these will continue till they are resisted with either words or blows, or with both. The limits of tyrants are prescribed by the endurance of those whom they oppress.” (Frederick Douglass)
ix. “All men think all men mortal but themselves.” (Edward Young)
x. “What ardently we wish we soon believe.” (-ll-)
xi. “Old age, after all, is merely the punishment for having lived.” (Emil Cioran)
xii. “We only do well the things we like doing.” (Colette)
xiii. “To die is poignantly bitter, but the idea of having to die without having lived is unbearable.” (Erich Fromm)
xiv. “I have grown accustomed to the disrespect expressed by some of the participants for their colleagues in the other disciplines. “Why, Dan,” ask the people in artificial intelligence, “do you waste your time conferring with those neuroscientists? They wave their hands about ‘information processing’ and worry about where it happens, and which neurotransmitters are involved, but they haven’t a clue about the computational requirements of higher cognitive functions.” “Why,” ask the neuroscientists, “do you waste your time on the fantasies of artificial intelligence? They just invent whatever machinery they want, and say unpardonably ignorant things about the brain.” The cognitive psychologists, meanwhile, are accused of concocting models with neither biological plausibility nor proven computational powers; the anthropologists wouldn’t know a model if they saw one, and the philosophers, as we all know, just take in each other’s laundry, warning about confusions they themselves have created, in an arena bereft of both data and empirically testable theories. With so many idiots working on the problem, no wonder consciousness is still a mystery. All these charges are true, and more besides, but I have yet to encounter any idiots. Mostly the theorists I have drawn from strike me as very smart people – even brilliant people, with the arrogance and impatience that often comes with brilliance – but with limited perspectives and agendas, trying to make progress on the hard problems by taking whatever shortcuts they can see, while deploring other people’s shortcuts. No one can keep all the problems and details clear, including me, and everyone has to mumble, guess and handwave about large parts of the problem.” (Daniel Dennett. I really liked this quote, which is why I included it in this post despite it being quite a bit longer than the quotes I usually include in posts like these)
xv. “There is no single, definitive “stream of consciousness,” because there is no central Headquarters, no Cartesian Theatre where “it all comes together” for the perusal of a Central Meaner. Instead of such a single stream (however wide), there are multiple channels in which specialist circuits try, in parallel pandemoniums, to do their various things, creating Multiple Drafts as they go. [...] The basic specialists are part of our animal heritage. They were not developed to perform peculiarly human actions, such as reading and writing, but ducking, predator-avoiding, face-recognizing, grasping, throwing, berry-picking, and other essential tasks. They are often opportunistically enlisted in new roles, for which their talents may more or less suit them.” (Daniel Dennett)
xvi. “The evidence of evolution pours in, not only from geology, paleontology, biogeography, and anatomy (Darwin’s chief sources), but from molecular biology and every other branch of the life sciences. To put it bluntly but fairly, anyone today who doubts that the variety of life on this planet was produced by a process of evolution is simply ignorant — inexcusably ignorant, in a world where three out of four people have learned to read and write.” (-ll-)
xvii. “Those who fear the facts will forever try to discredit the fact-finders.” (-ll-)
xviii. “The sorrow for the dead is the only sorrow from which we refuse to be divorced. Every other wound we seek to heal — every other affliction to forget: but this wound we consider it a duty to keep open — this affliction we cherish and brood over in solitude.” (Washington Irving)
xix. “The desire of knowledge, like the thirst of riches, increases ever with the acquisition of it.” (Laurence Sterne)
xx. “What is today supported by precedents will hereafter become a precedent.” (Tacitus)
I gave the book four stars on goodreads. Jasper Fforde’s Thursday Next series consists of seven books (so far), and by finishing this one I have now read the entire series. If you have ideas for what I should read next (I’m currently reading Saramago’s The Gospel According to Jesus Christ, a birthday present, but I’ll finish that one very soon) you’re welcome to add suggestions in the comment section below; do note that I read this series as a direct consequence of a reader recommending it.
The book is funny and absurd just like the others in the series, but I liked some of the others a bit better; it’s occasionally a bit too silly for my taste.
I have added some sample quotes from the book below.
“he tended to look upon me as the daughter he’d wished he had, and not the one he did have, who was a bit of a tramp.”
“‘You seem quite young,’ said Landen.
‘It’s due to my age,’ said Phoebe”
“On a worktop near by lay a machine that could assemble itself into a machine that would be able to dissemble itself, the practical applications of which were somewhat obscure.”
“‘I remember turning off the M4 and on to the M5, but I couldn’t swear to it. Next thing I know I’m sitting on a bench in Carlisle railway station five days later with £40,000 in cash, eight kilos of bootleg Camembert in the car and a wife waiting for me in Wrexham.’
‘You explained all this to SO-5?’
‘Many times. Quinn was the same, only she “came to” a day sooner than me, upside down in a Mercedes she had bought for cash two hours previously. There was an iguana on the back seat and the boot was full of rabbits.’
I exchanged looks with Landen. One of Aornis’s little memory tricks was to make you think you were someone you weren’t, then send you off to cause mayhem on the five-day non-recall bender.”
“‘This is Geraldine,’ said Duffy, ‘the assistant’s assistant to the assistant personal assistant of my own personal assistant’s assistant.’ [...] ‘How many assistants do I have?’ I asked, turning back to Duffy.
‘Including me, three.’
‘Three? Given Geraldine’s job title? How is that possible?’
‘They have multiple jobs. Geraldine, apart from being the assistant’s assistant to the assistant personal assistant of my own personal assistant’s assistant, is also my own personal assistant’s assistant’s assistant.’
‘No,’ said Geraldine, ‘that’s Lucy. I’m not only your assistant’s assistant’s sub-assistant, but also the assistant to the assistant to your personal assistant’s assistant.’
‘Wait,’ I said, thinking hard, ‘that must make you your own assistant.’
‘Yes; I had to fire myself yesterday. Luckily I was also above the assistant who fired me, so I could reinstate myself.”
“‘Tell me,’ I said, ‘did the previous Chief Librarian really vanish without trace?’
‘Not entirely,’ said Duffy, passing me a photograph of a concrete monorail support somewhere on the Wantage branch line. ‘We were sent this.’
I stared at the photograph.
‘Did you tell the police?’
‘They said it was nothing and that people get sent pictures of concrete monorail supports all the time.’
‘No, not really.'”
“‘And house prices are tumbling,’ said another. ‘If I wanted to sell I’d have to accept half of what I paid for it.'”
‘And what did you pay for it?’ I asked. ‘Just out of interest?’
‘A hundred pounds. They’re dirt cheap because no one wants to live here.’
‘We get occasional backflashes too,’ said the fourth, ‘but we only know that from external observers. Ooh, look,’ he added, pointing to a woman standing two hundred yards away who was waving a red flag, ‘Lori says we’ve just had one.'”
“‘Gavin, how did you turn out to be such a nasty piece of work?’
‘I could blame my parents but that’s just whiny victim bullshit. Some people are just naturally unpleasant. I’ve known for a long time that I’m something of a shit. I tried for years to hide it, but it never worked, so in the end I decided to just go with it, and see where it led me. What’s your excuse?'”
I decided to write one more post (this one) about the book and leave it at that. Go here for my first post about the book, which has some general remarks about the book, as well as a lot of relevant links to articles from wikipedia which cover topics also covered in the book. Below I have added some observations from the second half of the book.
“Use of bedrock geology to reconstruct ancient continental positions relies on the idea that if two separated continents were once joined to form a single, larger continent, then there ought to be distinctive geological terranes (such as mineral belts, mountain chains, bodies of igneous rock of similar age, and other roughly linear to irregularly-shaped large-scale geologic features) that were once contiguous but are now separated. Matching of these features can provide clues to the positions of continents that were once together. [...] The main problem with using bedrock geology features to match continental puzzle pieces together is that many of the potentially most useful linear geologic features on the continents (such as volcanic arcs or chains of volcanoes, and continental margin fold belts or parallel mountain chains formed by compression of strata) are parallel to the edge of the continent. Therefore, these features generally run parallel to rift fractures, and are less likely to continue and be recognizable on any continent that was once connected to the continent in question.
Paleomagnetic evidence is an important tool for the determination of ancient continent positions and for the reconstruction of supercontinents. Nearly all rock types, be they sedimentary or igneous, contain minerals that contain the elements iron or titanium. Many of these iron- and titanium-bearing minerals are magnetic. [...] The magnetization of a crystal of a magnetic mineral (such as magnetite) is established immediately after the mineral crystallizes from a volcanic melt (lava) but before it cools below the Curie point temperature. Each magnetic mineral has its own specific Curie point. [...] As the mineral grain passes through the Curie point, the ambient magnetic field is “frozen” into the crystal and will remain unchanged until the crystal is destroyed by weathering or once again heated above the Curie point. This “locking in” of the magnetic signal in igneous rock crystals is the crucial event for paleomagnetism, for it indicates the direction of magnetic north at the time the crystal cooled (sometime in the distant geologic past for most igneous rocks). The ancient latitudinal position of the rock (and the continent of which it is a part) can be determined by measuring the direction of the crystal’s magnetization. For ancient rocks, this direction can be quite different from the direction of present day magnetic north. [...] Paleomagnetic reconstruction is a form of geological analysis that is, unfortunately, fraught with uncertainties. The original magnetization is easily altered by weathering and metamorphism, and can confuse or obliterate the original magnetic signal. An inherent limitation of paleomagnetic reconstruction of ancient continental positions is that the magnetic remanence only gives information concerning the rocks’ latitudinal position, and gives no clue as to the original longitudinal position of the rocks in question. 
For example, southern Mexico and central India, although nearly half a world apart, are both at about 20 degrees North latitude, and, therefore, lavas cooling in either country would have essentially the same primary magnetic remanence. One of the few ways to get information about the ancient longitudinal positions of continents is to use comparison of life forms on different continents. The study of ancient distributions of organisms is called paleobiogeography.”
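The excerpt notes that remanent magnetization records latitude but not longitude. The standard way that latitude is actually extracted (not spelled out in the quoted passage) is the geocentric axial dipole relation, tan I = 2 tan λ, where I is the magnetic inclination frozen into the rock and λ the paleolatitude. A minimal sketch, assuming that textbook relation:

```python
import math

def paleolatitude(inclination_deg):
    """Paleolatitude (degrees) from remanent magnetic inclination (degrees),
    using the geocentric axial dipole relation tan(I) = 2 * tan(latitude)."""
    inclination = math.radians(inclination_deg)
    return math.degrees(math.atan(math.tan(inclination) / 2.0))

# A horizontal remanence implies a rock that cooled at the equator;
# an inclination of about 49 degrees implies roughly 30 degrees latitude.
print(round(paleolatitude(0.0), 1))   # 0.0
print(round(paleolatitude(49.1), 1))  # 30.0
```

This also makes the southern Mexico/central India example transparent: lavas cooling at the same ~20°N latitude acquire the same inclination, so the measurement alone cannot separate the two longitudes.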
“Photosynthesis is generally considered to be a characteristic of plants in the traditional usage of the term “plant.” Nonbiologists are sometimes surprised to learn that [some] animals are photosynthetic [...] One might argue that marine animals with zooxanthellae (symbiotic protists) are not truly photosynthetic because it is the protists that do the photosynthesis, not the animal. The protists just happen to be inside the animal. We would argue that this is not an important consideration, since photosynthesis in all eukaryotic (nucleated) cells is accomplished by chloroplasts, tiny organelles that are the cell’s photosynthesis factories. Chloroplasts are now thought by many biologists to have arisen by a symbiosis event in which a small, photosynthetic moneran took up symbiotic residence within a larger microbe [...]. The symbiotic relationship eventually became so well established that it became an obligatory relationship for both the host microbe and the smaller symbiont moneran. Reproductive provisions were made to pass the genetic material of the symbiont, as well as the host, on to succeeding generations. It would sound strange to describe an oak as a “multicellular alga invaded by photosynthetic moneran symbionts,” but that is — in essence — what a tree is. Animals with photosynthetic protists in their bodies are able to create food internally, in the same way that an oak tree can, so we feel that these animals can be correctly called photosynthetic. [...] Many of the most primitive types of living metazoa contain photosymbiotic microbes or chloroplasts derived from microbes.”
“The most obvious reason for any organism, regardless of what kingdom it belongs to, to evolve a leaf-shaped body is to maximize its surface area. Leaf shape evolves in response to factors in addition to surface area requirement, but the surface area requirement, in all cases we are aware of, is the most important factor. [...] Leaves of modern plants and Ediacaran animals probably evolved similar shapes for the same reason, namely, maximization of surface area. [...] Photosymbiosis is not the only possible departure from heterotrophic feeding, the usual method of food acquisition for modern animals. Seilacher (1984) notes that flat bodies are good for absorption of simple compounds such as hydrogen sulfide, needed for one type of chemosymbiosis. In chemosymbiosis as in photosymbiosis, microbes (in this case bacteria) are held within an animal’s tissues as paying guests. The bacteria are able to use the energy stored in hydrogen sulphide molecules that diffuse into the host animal’s tissues. The bacteria use the hydrogen sulfide to create food, using biochemical reactions that would be impossible for animals to do by themselves. The bacteria use some of the food for themselves, but great excesses are produced and passed on to the host animal’s tissues. [...] There may be important similarities between the ecologies of [...] flattened Ediacaran creatures and the modern deep sea vent faunas. [...] A form of chemotrophy (feeding on chemicals) that does not involve symbiosis is simple absorption of nutrients dissolved in sea water. Although this might not seem a particularly efficient way of obtaining food, there are tremendous amounts of “unclaimed” organic material dissolved in sea water. Monerans allow these nutrients to diffuse into their cells, a fact well known to microbiologists. Less well known is the fact that larger organisms can feed in this way also. Benthic foraminifera up to 38 millimeters long from McMurdo Sound, Antarctica, take up dissolved organic matter largely as a function of the surface area of their branched bodies”
“Although there is as of yet no unequivocal proof, it seems reasonable to infer from their shapes that members of the Ediacaran fauna used photosymbiosis, chemosymbiosis, and direct nutrient absorption to satisfy their food needs. Since these methods do not involve killing, eating, and digesting other living things, we will refer to them as “soft path” feeding strategies. Heterotrophic organisms use “hard path” feeding strategies because they need to use up the bodies of other organisms for energy. The higher in the food pyramid, the “harder” the feeding strategy, on up to the keystone predator (top carnivore) at the top of any particular ecosystem’s trophic pyramid. It is important to note that the term “hard,” as used here, does not necessarily imply that autotrophic organisms have any easier a time obtaining their food than do heterotrophic organisms. Green plants are not very efficient at converting sunlight to food; sunlight can be thought of as an elusive prey because it is not a concentrated energy source [...]. Low food concentrations are a major difficulty encountered by organisms employing soft path feeding strategies. Deposit feeding is intermediate between hard and soft paths. [...] Filter feeding, or capturing food suspended in the water, also has components of both hard and soft paths because suspension feeders can take both living and nonliving food from the water.”
“Probing deposit feeders [...] began to excavate sediments to depths of several centimeters at the beginning of the Cambrian. Dwelling burrows several centimeters in length, such as Skolithos, first appeared in the Cambrian, and provided protection for filter-feeding animals. If a skeleton is broadly defined as a rigid body support, a burrow is in essence a skeleton formed of sediment [...] Movement of metazoans into the substrate had profound implications for sea floor marine ecology. One aspect of the environment that controls the number and types of organisms living in the environment is called its dimensionality [...]. Two-dimensional (or Dimension 2) environments tend to be flat, whereas three-dimensional environments (Dimension 3) have, to a greater or lesser degree, a third dimension. This third dimension can be either in an upward or a downward direction, or a combination of both directions. The Vendian sea floor was essentially a two-dimensional environment. [...] With the probable exception of some of the stalked frond fossils, most Vendian soft-bodied forms hugged the sea floor. Deep burrowers added a third dimension to the benthos (sea floor communities), creating a three-dimensional environment where a two-dimensional situation had prevailed. The greater the dimensionality in any given environment, the longer the food chain and the taller the trophic pyramid can be [...]. If the appearance of abundant predators is any indication, lengthening of the food chain seems to be an important aspect of the Cambrian explosion. Changes in animal anatomy and intelligence can be linked to this lengthening of the food chain. Most Cambrian animals are three-dimensional creatures, not flattened like many of their Vendian predecessors. Animals like mollusks and worms, even if they lack mineralized skeletons, are able to rigidify their bodies with the use of a water-filled internal skeleton called a coelom [...] This fluid-filled cavity gives an animal’s body stiffness, and acts much like a turgid, internal, water balloon. A coelom allows animals to burrow in sediment in ways that a flattened animal (such as, for instance, a flatworm) cannot. It is most likely that a coelom first evolved in those Vendian shallow scribble-trail makers that were contemporaries of the large soft-bodied fossils. Some of these Ediacaran burrows show evidence of peristaltic burrowing. Inefficient peristaltic burrowing can be done without a coelom, but with a coelom it becomes dramatically more effective.”
“Bilateral symmetry is important when considering the behavior of [...] early coelomate animals. The most likely animal to evolve a brain is one with bilateral symmetry. Concomitant with the emergence of animals during the Vendian was the origin of brains. The Cambrian explosion was the first cerebralization or encephalization event. As part of the increase in the length of the food chain discussed above, higher-level consumers such as top or keystone predators established a mode of life that requires the seeking out and attacking of prey. These activities are greatly aided by having a brain able to organize and control complex behavior. [...] Specialized light receptors seem to be a characteristic of all animals and many other types of organisms; [...] photoreceptors have originated independently in at least forty and perhaps as many as sixty groups. Most animal phyla have at a minimum several pigmented eye spots. But advanced vision (i. e., compound or image-forming eyes) tied directly into a centralized brain is not common or well developed until the Cambrian. The tendency to have eyes is more pronounced for bilateral than for radial animals. [...] some of the earliest trilobites had large compound eyes. Trilobites were probably not particularly smart by modern standards, but chances are that their behavioral capabilities far outstripped any that had existed during the early Vendian. [...] Actively moving or vagile predators are, as a rule, smarter than their prey, because of the more rigorous requirements of information processing in a predatory life mode. Anomalocaris as a seek-and-destroy top predator may have been the brainiest Early Cambrian animal.”
“why didn’t brains and advanced predation develop much earlier than they did? A simple thought experiment may help address this problem. Consider a jellyfish 1 mm in length and a cylindrical worm 1 mm in length. Increase the size (linear dimension) of each (by growth of the individual or by evolutionary change over thousands of generations) one hundred times. [...] The worm will need internal plumbing because of its cylindrical body. The jellyfish won’t be as dependent on plumbing because its body has a higher surface area. [...] Our enlarged, 10 cm long worm will possess a brain which has a volume one million times greater than the brain of its 1 mm predecessor (assuming that the shape of the brain remains constant). The jellyfish will also get more nerve tissue as it enlarges. But its nervous system is spread out in a netlike fashion; at most, its nerve tissue will be concentrated at a few radially symmetric points. The potential for complex and easily reprogrammed behavior, as well as sophisticated processing of sensory input data, is much greater in the animal with the million times larger brain (containing at least a million times as many brain cells as its tiny predecessor). Complex neural pathways are more likely to form in the larger brain. This implies no mysterious tendency for animals to grow larger brains; perfectly successful, advanced animals (echinoderms) and even slow-moving predators (sea spiders) get along fine without much brain. But centralized nerve tissue can process information better than a nerve net and control more complex responses to stimuli. Once brains were used to locate food, the world would never again be the same. This can be thought of as a “brain revolution” that permanently changed the world a half billion years ago.”
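The arithmetic in the thought experiment is just cube-versus-square scaling, and it is worth checking the quoted numbers:

```python
# Scaling a 1 mm animal's linear dimension by 100 (to 10 cm) multiplies
# volume - and, shape held constant, brain volume - by 100**3, while
# surface area only grows by 100**2. The surface-to-volume ratio therefore
# falls by a factor of 100, which is why the enlarged cylindrical worm
# needs internal plumbing while the flatter jellyfish is less dependent on it.

scale = 100
volume_factor = scale ** 3                        # the "million times larger brain"
surface_factor = scale ** 2
sa_to_vol_drop = volume_factor // surface_factor  # factor by which SA/V falls

print(volume_factor)   # 1000000
print(surface_factor)  # 10000
print(sa_to_vol_drop)  # 100
```

So the “one million times greater” brain volume in the passage follows directly from the hundredfold increase in linear dimension, with no further assumptions.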
“There is little doubt that organisms produced oxygen before 2 billion years ago, but this oxygen was unable to accumulate as a gas because iron dissolved in seawater combined with the oxygen to form rust (iron oxide), a precipitate that sank, chemically inactive, to accumulate on the sea floor. Just as salt has accumulated in the oceans over billions of years, unoxidized (or reduced) iron was abundant in the seas before 2 billion years ago, and was available to “neutralize” the waste oxygen. Thus, dissolved iron performed an important oxygen disposal service; oxygen is a deadly toxin to organisms that do not have special enzymes to limit its reactivity. Once the reduced iron was removed from sea water (and precipitated on the sea floor as Precambrian iron formations; much of the iron mined for our automobiles is derived from these formations), oxygen began to accumulate in water and air. Life in the seas was either restricted to environments where oxygen remained rare, or was forced to develop enzymes [...] capable of detoxifying oxygen. Oxygen could also be used by heterotrophic organisms to “burn” the biologic fuel captured in the form of the bodies of their prey. [...] Much research has focused on lowered levels of atmospheric oxygen during the Precambrian. The other alternative, that oxygen levels were higher at times during the Precambrian than at present has not been much discussed. Once the “sinks” for free oxygen, such as dissolved iron, were saturated, there is little that would have prevented oxygen levels in the Precambrian from getting much higher than they are today. This is particularly so since there is no evidence for the presence of Precambrian land plants which could have acted as a negative feedback for continued increases in oxygen levels” [Here's a recent-ish paper on the topic - do note that there's an important distinction to be made between atmospheric oxygen levels and the oxygen levels of the oceans].
“There are 14 main skill areas covered in this text, beginning with nonverbal communication (NVC) in Chapter 3. This aspect of interaction is the first to be examined, since all of the areas that follow contain nonverbal elements and so an understanding of the main facets of this channel facilitates the examination of all the other skills. Chapter 4 incorporates an analysis of reinforcement, while questioning is reviewed in Chapter 5. In Chapter 6, an alternative strategy to questioning, namely reflecting, is investigated. Reflection consists of concentrating on what another person is saying and reflecting back the central elements of that person’s statements.
The skill of listening is explored in Chapter 7, where the active nature of listening is emphasised, while explaining is focused upon in Chapter 8. In Chapter 9, self-disclosure is examined from two perspectives; first, the appropriateness of self-disclosure by the professional, and second, methods for promoting maximum self-disclosure from clients. Two important episodes in any action – the opening and closing sequences – are reviewed in Chapter 10. Techniques for protecting personal rights are discussed in Chapter 11 in terms of the skill of assertiveness. The skill area of influencing and persuading has attracted growing interest in recent years and this is covered in Chapter 12, and the related skill of negotiation is addressed in Chapter 13. Finally, in Chapter 14 the skills involved in interacting in and leading small group discussions are examined.”
I’m currently reading this book.
So far it seems like a pretty standard textbook. It’s dense in the sense that Hargie talks about a lot of things. In other ways, though, it is not very dense; Hargie doesn’t spend any time on methodology or on how to conduct research in this area (which seems curious to me, as I take this to be a first-year text). He also does not seem particularly concerned about drawing conclusions from small studies from, say, the 1970s. On the plus side, in many cases you’re implicitly or explicitly made aware that these were small studies and/or that a given result is only supported by a couple of studies. Unfortunately, a few of the things Hargie writes about are things he seems to know a lot less about than I do – for example he’s obviously never read Aureli et al. – and one might well argue that this lack of knowledge is a problem in some specific parts of the coverage (perhaps more on account of topics not covered than of things actually covered; there have been parts of the coverage where I noted in the margin that ‘we know a lot more about this kind of stuff than he lets on’, or ‘this would have been much more interesting if he’d also included the results of animal studies on related topics’). Most texts I read these days are written by multiple authors, and I believe such books are in general of higher quality than single-author publications, in part because they’re much less likely to have blind spots like these. This book would probably (I can’t say for sure at this point, as I’ve yet to read most of it) have benefited from the inclusion of more biological research and less psychological research. But there are many interesting observations in the book, and most of the topics covered are both topics about which I don’t know much, and topics about which I’d probably benefit from knowing more. I have added some of the interesting observations from the book below.
“In the successful learning of new skills we move through the stages of unconscious incompetence (we are totally unaware of the fact that we are behaving in an incompetent manner), conscious incompetence (we know what we should be doing and we know we are not doing it very well), conscious competence (we know we are performing at a satisfactory level) and finally unconscious competence (we just do it without thinking about it and we succeed). This is also true of interpersonal skills. During free-flowing social encounters, less than 200 milliseconds typically elapses between the responses of speakers and rarely do conversational pauses reach three seconds. As a result, some elements, such as exact choice of words used and use of gestures, almost always occur without conscious reflection (Wilson et al., 2000). [...] Skilled responses are hierarchically organised in such a way that large elements, like being interviewed, are comprised of smaller behavioural units such as looking at the interviewer and answering questions. The development of interpersonal skills can be facilitated by training the individual to acquire these smaller responses before combining them into larger repertoires. Indeed, this technique is also used in the learning of many motor skills. [...] Skilled behaviour involves implementing behaviours at the most apposite juncture. Learning when to employ behaviours is just as crucial as learning what these behaviours are and how to use them.”
“It is widely agreed that relationships are shaped around two main dimensions that have to do with affiliation (or liking) and dominance, although a third concerning level of involvement or the intensity of the association also seems to be important [...] Power is also an important factor in human relationships [...]. When people with relatively little social power, occupying inferior status positions, interact with those enjoying power over them, the former have been shown [...] to manifest their increased ‘accessibility’ by, among other things:
• initiating fewer topics for discussion
• being more hesitant in what they say
• being asked more questions
• providing more self-disclosures
• engaging in less eye contact while speaking
• using politer forms of address
• using more restrained touch.
Sets of expectations are constructed around these parameters. It is not only the case that people with little power behave in these ways; there are norms or implicit expectations that they should do so.”
“There is evidence that introverts tend to speak less, make more frequent use of pauses, engage in lower frequencies of gaze at their partners, are less accurate at encoding emotion and prefer to interact at greater interpersonal distances (John et al., 2008; Knapp and Hall, 2010).”
“Given the inherent fluidity of interaction, Berger (1995: 149) argued persuasively that ‘reducing the actions necessary to reach social goals to a rigid, script-like formula may produce relatively ineffective social action’. [...] Skilled communication must always be adaptively and reflexively responsive to the emotional needs of the other.”
“There is a reduced prospect of successful face-to-face interaction in situations where interactors have little appreciation of their own NVC [Non-Verbal Communication], or a lack of sensitivity to the other person’s body language. This is as applicable to work as it is to everyday social situations. [...] the verbal medium has often been set as a benchmark for assessing the significance of the nonverbal. Consider a situation where a person is saying something but conveying an altogether different message through NVC. Which holds sway? What are the relative contributions of the two to the overall message received? In early research, still frequently cited, it was estimated that overall communication was made up of body language (55 per cent), paralanguage (the nonverbal aspects of speech) (38 per cent) and the verbal content (7 per cent) (Mehrabian, 1972). It may come as something of a surprise to learn that what we say may contribute a mere 7 per cent to the overall message received. These proportions, however, should not be regarded as absolute and seriously underrepresent the contribution of verbal communication in circumstances where information from all three channels is largely congruent. Guerrero and Floyd (2006) offered a more modest estimate of 60 to 65 per cent of meaning carried nonverbally during social exchanges. While likewise questioning the veracity of the Mehrabian figures, a review by Burgoon et al. (1996) nevertheless still identified a general trend favouring the primacy of meaning carried nonverbally, with a particular reliance upon visual cues. But qualifying conditions apply. The finding holds more for adults in situations of message incongruity and where the message has to do with emotional, relational or impression-forming outcomes.”
“Detailed analyses have revealed some of the strategies used to prevent over-talk, handle it when it occurs, and generally manage turn-taking [...]. NVC is an important part of this process. Conversationalists are able to anticipate when they will have an opportunity to take the floor. Duncan and Fiske (1977) identified a number of nonverbal indices that offer a speaking turn to the other person. These include a rise or fall in pitch at the end of a clause, a drop in voice volume, termination of hand gestures and change in gaze pattern. In addition, they found that if a speaker persisted with a gesticulation even when not actually talking at that point, it essentially eliminated attempts by the listener to take over the turn.
Hence, someone [...] coming to the end of a speech turn will typically introduce a downward vocal inflection (unless they have just asked a question), stop gesticulating and look at their partner [...] NVC is a crucial source of information on how we feel, and how we feel about others. Furthermore, relating successfully depends upon competence in both encoding (sending) and decoding (interpreting) NVC [...] Facial expressions represent an important emotional signalling system, although body movements and gestures are also implicated. [...] six basic emotions consistently decodable are sadness, anger, disgust, fear, surprise and happiness, with contempt as a possible seventh. There is evidence that we may be specially attuned to process certain types of emotional information leading to the rapid recognition of anger and threat [...] questions have been raised over the extent to which facial expressions can be thought of as the direct products of underlying biologically determined affective states [...] An alternative way of viewing them is as a means of signalling behavioural intent.”
“Through largely nonverbal means, people establish, sustain, strengthen or indeed terminate a relational position. This can be done on an ongoing basis, as adjustments are made to ensure that levels of involvement are acceptable. Immediacy or psychological closeness is a feature of interaction that is regulated in part nonverbally, and indeed has been singled out as arguably the most important function of NVC [...]. Immediacy has to do with warmth, depth of involvement or degree of intensity characterising an encounter. It is expressed through a range of indices including eye contact, interpersonal distance, smiling and touch, and must be appropriate to the encounter. Violating expectations in respect of these, for example by coming too close, gazing too much, leaning too far forward or orienting too directly, can lead to discomfort on the part of the recipient, compensatory shifts by that person and negative evaluations of the violator”
“according to communication accommodation theory [...], interlocutors convey their attitudes about one another and indicate their relational aspirations by the extent to which they tailor aspects of their communicative performance to make these more compatible with those of the other. They may adjust their initial discrepant speech rate, for instance, to find a balanced compromise or alternatively accentuate difference if they find they have little desire to promote commonality, reduce social distance or seek approval. When individuals are actively managing personal relationships it would often be too disturbing to state openly that the other was not liked or thought to be inferior. Nonverbal cues can be exchanged about these states but without the message ever being made explicit.”
“Aspects of social power, dominance and affiliation are conveyed through nonverbal channels. Amount of talk (talk time), loudness of speech, seating location, posture, touch, gestures and proximity are instrumental in conveying who is controlling the situation as the dominant party in an interaction [...] Powerful individuals, when interacting with subordinates, tend to indulge in more non-reciprocated touch [...] Those displaying such behaviour also attract higher ratings of power and dominance than the recipients of that contact [...] That said, Andersen (2008) concluded from his review of the evidence that touch had actually more to do with conveying immediacy and intimacy than with status and dominance. [...] touch was one of the principal components of the expression of immediacy reviewed by Guerrero (2005). [...] haptic communication seems to change across the lifespan. Younger men (under 30 years) and those in dating relationships (rather than being married) have been found to touch more than females [...] marital status is also important in that unmarried men have more favourable reactions to touch than unmarried women, while for married males and females this pattern is reversed.”
“Posture is one of the cues used to make decisions about the relative status of those we observe and deal with [...] The degree of relaxation exuded seems to be a telling feature [...]. High-status individuals characteristically adopt a more relaxed position when they are seated (e.g. body tilting sideways; lying slumped in a chair) than low-status subjects who are more upright and rigid. When standing, people in a position of power and influence again appear more relaxed, often with arms crossed or hands in pockets, than those in subordinate positions who are generally ‘straighter’ and ‘stiffer’. Those with high status are also likely to take up more expansive postures, standing at their full height, chest expanded and with hands on hips (Argyle, 1988).”
“A seated person who leans forward towards the other is deemed to have a more positive attitude towards both the person and the topic under discussion than when leaning backwards [...] most prolonged interactions are conducted with both participants either sitting or standing, rather than one standing, and the other sitting. Where this situation does occur, communication is usually cursory (e.g. information desks) or strained (e.g. interrogation sessions). Relative posture adopted is a significant marker of how interactors feel about each other and of the relationship between them. Postural congruence or mirroring occurs when similar or mirror-image postures are taken up, with ongoing adjustments to maintain synchrony. Common matched behaviours include leg positions, leaning forward, head propping, facial expressions and hand and arm movements. This form of ‘mimicry’, which is usually carried out subconsciously, is taken as a positive sign that the exchange is harmonious. Research findings show that ‘mimicry serves an important social function in that it facilitates the smoothness of interactions and increases liking between interaction partners’ [...]. The evidence also indicates that we are more likely to mimic the verbal and nonverbal behaviour of people whom we like or are attracted to [...]. This means that we are in turn more likely to be attracted to those who mirror our behaviours. Thus, therapists who use matching postures are perceived by clients to be affiliative and empathic, and this in turn encourages greater interviewee disclosure (Hess et al., 1999).”
“Gaze refers primarily to looking at another in the facial area. Mutual gaze happens when the other reciprocates. This is sometimes also referred to as eye contact when the eyes are the specific target, although just how accurately we can judge whether someone is looking us directly in the eye or merely in that region of the face is open to debate. Associated terms are gaze omission where gaze is absent and gaze avoidance where it is intentionally being withheld. When gaze becomes fixed and focused in an intrusive way that may infringe norms of politeness, it becomes a stare and is associated with a different set of social meanings and potential reactions.
Gazing during social interaction can serve a variety of purposes. In an early analysis, Kendon (1967) suggested these were primarily to do with expressing emotional information, regulating interaction, revealing cognitive activity and monitoring feedback from the other. More recent classifications [...] are elaborate differentiations of these core functions, but add the further purpose of marking the relationship.”
“Catching someone’s eye is the necessary first step to opening up channels of communication and seeking contact with them. In a group discussion, patterns of gazing are used to orchestrate the flow of conversation, with members being brought into play at particular points. In dyads, a typical interactive sequence would be person A coming towards the end of an utterance looking at person B to signal that it is B’s turn to speak. B, in turn, looks away after a short period of mutual gaze to begin responding, especially if intending to speak for a long time, or if the message is difficult to formulate in words. Person A will continue to look reasonably consistently while B, as speaker, will have a more broken pattern of glances (Argyle, 1994). [...] We tend to avoid gaze when processing difficult material in order to minimise distractions. Thus, there is a greater likelihood of gaze being avoided when attempting to answer more difficult questions [...] speakers gaze periodically to obtain feedback and make judgements about how their message is being received and adjustments that may need to be made to their delivery. [...] We make more and longer eye contact with people we regard positively and from whom we expect a positive reaction”
“There are both costs and benefits associated with conducting scientific and technological research. Whereas the benefits derived from scientific research and new technologies have often been addressed in the literature (for a good example, see Evenson et al., 1979), few of the major non-monetary societal costs associated with major expenditures on scientific research and technology have so far received much attention.
In this paper we investigate one of the major non-monetary societal cost variables associated with the conduct of scientific and technological research in the United States, namely the suicides resulting from research activities. In particular, we analyze the association between scientific and technological research expenditure patterns and the number of suicides committed using one of the most common suicide methods, namely that of hanging, strangulation and suffocation (HSS). We conclude from our analysis that there is a very strong association between scientific research expenditures in the US and the frequency of suicides committed using the HSS method, and that this relationship has been stable for at least a decade. An important aspect of this association is the precise mechanism through which the increase in HSS suicides takes place. Although the mechanisms are still not well elucidated, we suggest that one important component of the relationship may be judicial research, as initial analyses of related data have suggested that this variable may be important. We argue that our initial findings provide impetus for considering this pathway a particularly important area of future research in this field.”
“Murders by bodily force (Mbf) make up a substantial number of all homicides in the US. Previous research on the topic has shown that this criminal activity compromises some common key biological functions in victims, such as respiration and cardiac function, and that many people with close social relationships to the victims are psychosocially affected as well, which means that this societal problem is clearly of some importance.
Researchers have long known that the marital state of the inhabitants of the state of Mississippi, and the dynamics of this variable, have important nation-wide effects. Previous research has, for example, analyzed how the marriage rate in Mississippi determines the US per capita consumption of whole milk. In this paper we investigate how the dynamics of Mississippian marital patterns relate to the national Mbf numbers. We conclude from our analysis that there is very clearly a strong association between the divorce rate in Mississippi and the national level of Mbf. We suggest that the effect may go through previously established channels such as milk consumption, but we also note that the precise relationship has yet to be elucidated and that further research on this important topic is clearly needed.”
This abstract is awesome as well, but I didn’t write it…
The ‘funny’ part is that I could actually easily imagine papers not too dissimilar to the ones just outlined getting published in scientific journals. Indeed, in terms of the structure I’d claim that many published papers are exactly like this. They do significance testing as well, sure, but hunting down p-values is not much different from hunting down correlations and it’s quite easy to do both. If that’s all you have, you haven’t shown much.
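The correlation-hunting point is easy to demonstrate: compare enough unrelated trending series against a target and some candidate will look impressively correlated purely by chance. A minimal sketch using toy random-walk data and a plain Pearson correlation (none of this is taken from the mock abstracts above; all numbers are made up):

```python
import random

random.seed(0)

def random_walk(n):
    """Generate a random walk of length n (cumulative sum of +/-1 steps)."""
    walk, level = [], 0.0
    for _ in range(n):
        level += random.choice([-1, 1])
        walk.append(level)
    return walk

def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Pretend each walk is a 20-year time series (research spending, suicides, ...)
candidates = [random_walk(20) for _ in range(200)]
target = random_walk(20)

# 'Hunt down' the strongest correlation among 200 unrelated candidates.
best = max(abs(pearson(series, target)) for series in candidates)
print(f"strongest |r| found among 200 random series: {best:.2f}")
```

With 200 independent random walks, the best match against an equally random target routinely comes out well above |r| = 0.5 – exactly the kind of ‘very strong association’ the mock abstracts report, obtained from pure noise.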
Here’s what I wrote about the book on goodreads:
“This book is almost 25 years old, and this is one of the main reasons why I did not give it five stars. Parts of this book are just amazing, but the fact that I felt it necessary to continually look up terms and ideas covered in the book made it slightly less fun to read than it could have been. Some parts of the scientific vocabulary applied throughout the book are frankly outdated, and this reflects not only a change in which words are used but also, more importantly, a change in how people think about these things. That progress has been made since the book was written is a good thing, but it did subtract a little from the overall reading experience that I very often felt I had to be quite careful about which specific conclusions to accept and which to question. It does not help that some of the main conclusions towards the end of the book seem to have been proven, for lack of a better word, wrong.
But all in all it’s really a very nice book – there’s a lot of fascinating stuff in there.”
A few sample quotes from the book:
“a distinction needs to be made between the two major types of animal fossils — body fossils and trace fossils. Body fossils are either actual parts of the organism’s body (such as a shell or a bone), or impressions of body parts (even if the parts themselves have been dissolved away or otherwise destroyed). The imprint of a feather or leaf or the external surface of a shell are examples of body fossils. [...] Trace fossils are markings in the sediment (usually made while the sediment was still soft) left by the feeding, traveling, or burrowing activities of animals. Familiar examples of trace fossils include tracks and trails made by worms as they plow through sediment looking for food and ingesting sediment. [...] Completely unrelated organisms can make trace fossils which are indistinguishable to paleontologists. Trace fossils are part of the fabric of the sediment, and therefore can be very resistant to destruction by metamorphism of the surrounding rock. Body fossils, on the other hand, are often destroyed by chemical reactions with the surrounding sediment. But body fossils are the only fossil type that can consistently give reliable information about the identity of the organism which left the remains. [...] The worst problem in the search for the oldest animal fossils is mistaken identity. Sedimentary rocks are replete with irregular structures and small scale disturbances or interruptions of the horizontal bedding or layering. Some of these disturbances are caused by organisms, but many are not. [...] Usually a well-preserved and well-formed trace fossil is unquestionably biologic in origin, and all paleontologists would agree that the trace was formed by an animal. Yet it can be difficult to define precisely what it is about a trace fossil that makes it convincingly biogenic (formed by life). [...] A sedimentary structure that resembles, but is in fact not, a trace fossil (or a body fossil, for that matter) is called a pseudofossil. 
Pseudofossils have plagued the study of Precambrian paleontology because many inorganic sediment disturbances look deceptively like fossils.”
“Convincing trace fossils are known from the late Precambrian, sometimes in association with the soft bodied Ediacaran fossils (Glaessner 1969). These trace fossils are generally simpler, less common, and less diverse than Cambrian trace fossils. There is a significant difference in the complexity and depth of burrowing between Cambrian and Precambrian trace fossils, and it has been argued that the changeover from simple trace fossils to more complex types of traces occurred at more or less the same time as the Cambrian explosion, the first appearance of abundant Cambrian shelly fossils. [...] Even shallow, sediment surface burrows in the Cambrian show a marked change in character over their Precambrian predecessors. [...] something outstanding happened to the abilities of trace-fossil makers across the Precambrian-Cambrian boundary. Animals discovered a large number of ways to effectively use the sediment as a food resource, and also began to move deeper into the substrate for deposit feeding and homebuilding.”
“Seilacher (1984, 1985) recognizes that flattened body shapes maximize surface area for the takeup of oxygen and food dissolved in seawater, and perhaps also for the absorption of light. “Normal” metazoan animals generally have plump, more or less cylindrical, bodies. For very small, thin skinned animals, cells near the body surface can get oxygen and expel waste by simple diffusion across the cell surface membranes. Waste products such as carbon dioxide will be supersaturated inside of the animal’s body, and will tend to migrate out of its cells and into the open environment. The reverse is true for oxygen; it will tend to migrate into the cells because its concentration is greater on the outside than on the inside of an oxygen-respiring animal. Animals such as frogs and salamanders are able to respire (at least in part) in this way. But for most large, cylindrical animals, diffusion respiration will not work because diffusion is ineffective for cells buried deep within the animal’s body. This is a consequence of the fact that as an animal increases its size, its total volume outstrips its surface area by a large margin. [...] metazoans have developed intricate systems of pipework and tubing to deliver nutrient and waste removal services to interior cells. Circulatory systems, digestive tracts, gills, and lungs are all solutions to the problems associated with volume increase.”
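The surface-area-to-volume argument in the quote above is simple arithmetic. Modelling a ‘plump, more or less cylindrical’ body as a cylinder (the dimensions here are my own toy numbers, not from the book), isometric scaling makes the surface/volume ratio fall in direct proportion to body size:

```python
import math

def cylinder_sa_v(radius, length):
    """Surface area and volume of a cylinder (a crude 'worm' body plan)."""
    sa = 2 * math.pi * radius * length + 2 * math.pi * radius ** 2
    v = math.pi * radius ** 2 * length
    return sa, v

# Scale the whole body up 10x and 100x in every dimension (isometric scaling).
for scale in (1, 10, 100):
    sa, v = cylinder_sa_v(0.1 * scale, 1.0 * scale)
    print(f"scale {scale:>3}x: surface/volume ratio = {sa / v:.3f}")
# prints ratios of 22.000, 2.200 and 0.220 respectively
```

For a cylinder of radius r and length L the ratio works out to 2/r + 2/L, so a tenfold increase in linear size cuts the ratio tenfold – which is why diffusion across the body surface stops being a viable respiration strategy for large animals, as the quote explains.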
“Monoplacophorans [...] are cap-shaped shells distinguished by two rows of muscle scars on the interior of the shell. They were thought extinct until living specimens were dredged from the deep sea and described in the late 1950s. Monoplacophorans have had an unusual history of discovery. They are the only group of animals that has been: (a) described hypothetically before being discovered; (b) found as fossils before being found alive; and (c) dredged from the depths of the oceans before being collected from shallower marine waters (Pojeta et al. 1987). [...] Rostroconchs are a major, extinct, order of mollusks that first appeared in the earliest Cambrian. Rostroconchs have a shell that is shaped like a clam shell, except that instead of having an organic ligament connecting the two valves, the two halves of a rostroconch shell are fused together to form a single valve. Despite this fusion, larger rostroconchs look very much like clam fossils with valves still articulated, which partly explains why rostroconchs were not recognized as a major, distinct, group until the 1970s. [...] Slightly after the first appearance of rostroconchs, the first true clams or bivalves appear. Clams probably had the same ancestor as the rostroconchs [...]. Instead of keeping the two valves fused as in rostroconchs, clams hinged the valves with articulating teeth and a tough, organic ligament. This evidently proved to be the more successful approach, since bivalve shells now litter the beaches all over the earth, whereas rostroconchs dwindled to extinction in the Permian.”
“Of the earliest Cambrian shelly fossils, many groups are truly problematic in the sense that not only do we have no idea what kind of animal made them, but also we have no clear conception of the function or functions of the skeletal remains. [...] there is an anomalously high proportion of small shelly fossils that do not belong to later phyla. “Living fossils” are creatures alive today that have undergone very little morphologic change for long stretches (sometimes 100 million years or more) of geologic time. Few living fossils remain from the earliest Paleozoic fauna. [...] Many of the groups that were most important in the Cambrian are unimportant or extinct today, for example, the trilobites, the inarticulate brachiopods, hyoliths, monoplacophorans, eocrinoids, the sclerite-bearers, and phosphatic tube-formers. True metazoans were undoubtedly present before the Cambrian, but they were all, with [few] exception[s] [...], soft-bodied. New types of soft-bodied animals appear in the Cambrian as well, but our understanding of these forms is restricted to rare finds of Cambrian soft-bodied fossils, which are even rarer than finds of the Ediacaran fauna.”
I’ll just quote that last part again: “our understanding of these forms is restricted to rare finds of Cambrian soft-bodied fossils”.
They’re talking about finds of soft-bodied organisms – organisms which did not make shells or anything like that – which lived more than 500 million years ago. To get a sense of perspective on how long ago that is, have a look at this picture – that’s one guess at what the Earth might have looked like back then. The fact that we know anything at all about soft-bodied animals living back then is pretty amazing to think about.
I could easily write perhaps four posts about this book, but I’m not going to do that. Instead I have decided, for now, to limit my coverage to the stuff above plus some links to relevant material I looked up while reading the book, which I have posted below – I was surprised by how much relevant stuff wikipedia has on these matters, and if you’re curious you should really go have a look at some of those links. I will probably add another post about the book later on with some more observations – it seems wrong to limit coverage of this great book to a single post, but there’s no way I can cover all the good stuff in there anyway.
Here are, as mentioned, some relevant wiki links to the kinds of stuff they talk about in this book. Most of them point to articles of what I’d consider ‘reasonable’ length and quality, and although I have not read all of them, some of them are quite good:
Ediacara biota (featured).
Cloudinid (‘good article’).
Brachiopod (‘good article’).
Bryozoa (‘good article’).
Global Boundary Stratotype Section and Point (noteworthy in this context is that the Precambrian/Cambrian boundary GSSP at Fortune Head had not been decided upon when this book was written – they have a whole chapter about these and related things).
Marinoan glaciation (this is not what it’s called in the book, but that is what they’re talking about anyway).
Timeline of glaciation.
Great Oxygenation Event.
I’m not sure how interesting these lectures will be to people reading along here, but I figured I might as well share them.
I finished the book. Here’s what I wrote on goodreads:
“Parts of the coverage were complete crap and/or dealt with stuff that was not remotely interesting to me, but other parts were actually really good – which makes the book sort of difficult to rate. In the end I decided to settle on a two-star rating, even though I consider some of the recommendations made in the book to be far from ‘okay’.”
One of those recommendations is to give narcotics to young children with abnormal brains who don’t behave the way their parents would like them to, even though it’s clear that such approaches only deal with symptoms and do nothing to address the actual underlying causes (see below; I haven’t quoted extensively from this part of the book, but you should note that I draw different conclusions than the authors do – as I incidentally do in other areas as well…). Ritalin, Adderall, etc.; subjecting young children with brains which are still developing to these types of pharmacological interventions just makes my skin crawl, especially as I consider it more or less beyond doubt that part of what is going on here is simply the medicalization of normal behaviours.
I have added some observations from the last half of the book below.
“Over the years there have been many [behavioural] interventions developed for young children with ASD [autism spectrum disorder]. Thus far, research has not demonstrated that any particular intervention approach is better than the others. [...] The amount of empirical support for different treatment approaches varies significantly. However, in general, more empirical support is needed for all approaches, and studies comparing the efficacy and effectiveness of different approaches are sorely lacking. [...] many community-based programs may offer eclectic approaches combining elements of various types of interventions within a single program. Although we are beginning to understand better the types of approaches that seem to be successful for young children with autism and their families, we really have no way of knowing the extent to which eclectic, community-based approaches are successful.”
“Impairment in social communication is one of the diagnostic criteria for ASD; therefore, communication is universally affected in individuals with ASD. Social communication includes many nonverbal aspects such as eye gaze and facial expression. [...] The degree of affectedness of spoken language can vary widely in individuals with ASD. Sixty to 70% of individuals with ASD are low-verbal or nonverbal ["30% of individuals with autism develop fluent spoken language"], with substantial difficulty with the understanding of spoken language and the ability to use it for functional communication (Fombonne, 2005). However, most children with ASD develop some spoken language skills ["50% of individuals with autism develop some usable spoken language"]. In a study of a large sample of nine-year-olds with ASD, fewer than 15% were classified as nonverbal (defined as using fewer than five words per day) [...]. Children with ASD generally have large amounts of immediate or delayed echolalia in which they immediately imitate what someone else says or repetitively use language they have heard from sources such as television, movies, books, or videogames. [...] Echolalia is thought to result because children with ASD have an abnormally large “attentional window” for language, resulting in learning larger “chunks” of language. Multiple words are treated as if they were a single word. [...] Even high-functioning individuals with ASD may have difficulty with understanding the social use of language because they interpret words literally and have difficulty making inferences. Highly verbal individuals with ASD may monopolize the conversation by going on and on about a topic that is interesting to them and failing to give their communication partner a turn to speak. Language proficiency or verbal ability is consistently associated with positive outcomes in social and adaptive functioning for children and adults with ASD.”
“The employment outcomes for individuals with high-functioning autism and Asperger’s disorder are reported to be generally much lower than would have been expected on the basis of their intellectual functioning (Howlin, 2004)” [Here is one relevant quote from the book which I found after a brief skim: "despite having IQ scores well within the normal range (and sometimes reaching quite high academic levels) the majority of individuals in both the Asperger and high-functioning autism groups studied by Howlin (2003) had no close friends, remained highly dependent on their families for support and had low employment status" - unfortunately no specific data was included in that section. This article has some more general numbers dealing with all individuals on the autism-spectrum - one relevant quote: "The unemployment rate for autistic people seems to be about 66%, according to data from 2009, compared to about 9% for the general population. Some estimates, like Bell’s, are even higher: 80-85% unemployment."]
“People with ASD require predictability, consistency, and structure”
“Most individuals with ASD are diagnosed in early childhood or by school-age. Some young adults are evaluated for symptoms of high-functioning autism or Asperger’s disorder that have not been identified earlier.” [It seems my experience is not unique... No, I did not think it was...]
“Some common characteristics seen in persons with ASD that may lead to difficulty coping include:
• Challenges in interpreting nonverbal language
• Rigid adherence to rules
• Poor eye-gaze or avoidance of eye contact
• Few facial expressions and trouble understanding the facial expressions of others
• Poor judge of personal space — may stand too close to other students
• Trouble controlling their emotions and anxieties
• Difficulty understanding another person’s perspective or how their own behavior affects others
• Very literal understanding of speech; difficulty in picking up on nuances
• Unusually intense or restricted interests in things (maps, dates, coins, numbers/statistics, train schedules)
• Unusual repetitive behavior, verbal as well as nonverbal (hand flapping, rocking)
• Unusual sensitivity to sensations [...]
• Difficulty with transitions, need for sameness
• Possible aggressive, disruptive, or self-injurious behavior [...]
Situations that often increase anxiety for persons with Asperger’s disorder and lead to difficulty coping include:
• When conversation involves multiple speakers
• Rapid shifting of topics
• Latency of response
• Difficulty in seeking clarification
• Lack of confidence
• Overabundance of irrelevant information”
“Research supporting the use of CBT [cognitive behavioural therapy] in ASD is limited [...] At this stage, it remains difficult to make strong conclusions regarding the efficacy of CBT in people with ASD”
“There are currently three well-accepted theories of social development in ASD. Each will be briefly described below.
Theory of Mind deficit
Definition: Difficulty with both awareness and understanding of another individual’s perspective. This was later referred to as “mind blindness.” Research has shown that most typical children learn this skill by age four, while children with ASD learn this much later, between the ages of nine and 14 years.
The social implications of Theory of Mind deficits (Cumine et al., 1998)
• Difficulty predicting the behavior of others, leading to the avoidance of anxiety-producing situations
• Difficulty reading the intentions of others and understanding motives behind their behavior
• Difficulty explaining their own behavior
• Difficulty understanding emotions, their own and those of others, leading to the appearance of lack of empathy
• Difficulty understanding how their behavior affects how others think or feel, leading to an apparent/perceived lack of conscience or motivation to please others
• Difficulty taking into account what other people know or can be expected to know, leading to the appearance of disorganized cognitive processing
• Inability to read and react to the listener’s level of interest in what is being said
• Inability to anticipate what others might think of one’s actions
• Inability to deceive or understand deception
• Poor sharing of attention
• Lack of understanding of social interactions that enable the initiation and maintenance of social relationships [...]
Central Coherence deficit
Definition: Difficulty drawing multiple sources of social and environmental information together, causing problems with understanding the larger contextual picture.
The social implications of Central Coherence deficits (Cumine, et al., 1998)
• Idiosyncratic focus of attention
• Imposition of the individual’s own perspective onto others’ experiences
• A preference for the known
• Inattentiveness to new tasks
• Difficulty choosing and prioritizing
• Difficulty organizing themselves, materials, and experiences
• Difficulty seeing social connections, thus causing problems with generalizing skills and knowledge
• Lack of compliance with directives that they do not understand [...]
Executive Functioning deficits
Definition: Difficulty with the following executive functioning skills:
• Self monitoring
• The ability to inhibit various social responses
• The ability to express behavioral flexibility
• Processing and expressing information in an organized and fluid manner
The social implications of Executive Functioning deficits (Ozonoff, 1995)
Causes difficulties with the following:
• Perceiving others’ emotions
• Imitation of social behaviors
• Pretend play, which is essential to early learning
• Planning, organizing, and prioritizing
• Starting and stopping activities, behaviors, and thoughts”
“Just as there is a spectrum of autism, there is also a range of social motivation. Some individuals are extremely interested in engaging with their peers, but struggle to appropriately initiate or maintain social connectedness. Others with ASD have very little social motivation. These individuals often report extreme anxiety when engaging with people for a variety of reasons. For example, it may be difficult to predict how the people around them will behave and, therefore, individuals on the spectrum may choose to avoid all anticipated anxiety-provoking social environments. [...] Many people assume that a lack of social initiation and reciprocal communication indicates that individuals with ASD lack the desire to engage in social interaction. On the contrary, many individuals with ASD lack the skills to be successful socially, yet they desire to be a part of social relationships”
“A recent meta-analysis of 55 single-subject research studies revealed that “social skills programs for children with autism are largely ineffective” (Bellini, 2007).”
“no medication has yet been identified that is capable of treating the core features of ASD [...] However, various medications have been used to treat behavioral symptoms in ASD [...] Antidepressant medications have been used in the treatment of specific psychiatric comorbid disorders in individuals with ASD. They have also been used to target selected symptoms in ASD such as repetitive preoccupations, perseverative behaviors, and social anxiety. [...] The Interactive Autism Network (IAN) reported in 2009 that, out of 5,174 children diagnosed with ASD, 12.2% of them [were] taking antidepressant medication. [...] [The] reports [on antidepressants] have been anecdotal, small case studies, and small designed studies. Investigation has been limited by small sample size, broad age range, and being uncontrolled. The [single] large double-blind, placebo-controlled study of citalopram in 149 children with ASD [...] found that there was no significant improvement on multiple measures. [...] Antiepileptic medications are typically used for treatment of seizures, which may occur in approximately one third of children with ASD [...] Mood-stabilizing antiepileptic drugs (AEDs) have also been used to treat mood instability, agitation and aggression in ASD. Despite the availability of more than a handful of double-blind trials (most of which failed to support the use of AEDs), surveys of psychopharmacological use among children and adults with ASD suggest fairly frequent use. [...] It is not uncommon for individuals with ASD to be prescribed more than one psychotropic medication. For example, in the 2005 ASD psychoactive medication survey conducted by Aman and colleagues, 9.8% of the sample were identified as taking two different drugs; 7.7% were taking three drugs [...] To date, we are unaware of any randomized controlled trials involving the use of polypharmacy in ASD.”
[I also talked about these aspects in the previous post, but the authors are clearer about the details in this last part of their coverage than they were in the earlier parts I covered, so I'll include these observations as well:] “Considering that deficits in communication and social behaviors are inseparable and more accurately considered as a single set of symptoms with contextual and environmental specificities, the DSM-IV-TR’s three domains (communication deficits, social deficits, stereotypic interests and behaviors) become two in DSM-5: (a) social/communication deficits, and (b) fixated interests and repetitive behaviors. The newly proposed diagnosis for ASD requires that both criteria be completely fulfilled. In addition, the current clinical and research consensus appears to be that Asperger’s disorder is part of ASD. Research currently reflects that Asperger’s disorder is not substantially different from other forms of “high-functioning” autism with good formal language skills and good (at least verbal) IQ.”
I liked this book and I gave it 3 stars on goodreads. Much of it was a review of stuff also covered in Sperling et al. (or elsewhere, see also this blog-post which actually includes some of the same data included in the coverage below), but there was some new stuff as well. I’ve added some relevant observations from the book below – I incidentally do not think most of the stuff included in this post should be at all hard to read for people who do not have diabetes.
“Hypoglycemia is a fact of life for most people with type 1 diabetes [...] The average patient suffers untold numbers of asymptomatic episodes, two episodes of symptomatic hypoglycemia per week (thousands of such episodes over a lifetime of diabetes), and one episode of severe, temporarily disabling hypoglycemia, often with seizure or coma, per year.
Given increased recognition of the magnitude of the problem of iatrogenic hypoglycemia in type 1 diabetes, and practical improvements in the glycemic management of diabetes, over the nearly two decades since the Diabetes Control and Complications Trial (DCCT) was reported in 1993 (DCCT 1993), one might anticipate that hypoglycemia would have become less of a problem. Unfortunately, there is no evidence of that in population-based studies. For example, in their study reported in 2007, the U.K. Hypoglycaemia Study Group (UK Hypo Group 2007) found the incidence of severe hypoglycemia in patients with type 1 diabetes treated with insulin for <5 years to be comparable to that in the Stockholm Diabetes Intervention Study (Reichard and Pihl 1994) (both 110 per 100 patient-years) reported in 1994 and higher than that in the DCCT”
“the U.K. Hypoglycaemia Study Group (UK Hypo Group 2007) found the incidence of severe hypoglycemia in patients with type 1 diabetes treated with insulin for >15 years (320 episodes per 100 patient-years) to be threefold higher than in individuals treated for <5 years [...] Hypoglycemia is particularly common during the night [...] A consistent observation since the DCCT (1991, 1993, 1997) is that more than half of the episodes of hypoglycemia, including severe hypoglycemia, occur during the night (Chico et al. 2003; Guillod et al. 2007). [...] Antidiabetic drugs, mostly insulin, [have been] found to be second only to anticoagulants as a cause of emergency hospitalization for adverse drug events in people >65 years of age, and those visits [are] almost entirely because of hypoglycemia (Budnitz et al. 2011). [...] Overall, hypoglycemia is less frequent in type 2 diabetes than in type 1 diabetes [...] the risk of hypoglycemia is relatively low in the first few years of insulin treatment of type 2 diabetes [...], [however] the risk increases substantially, approaching that in type 1 diabetes, later in the course of type 2 diabetes [...] The prospective, population-based study of Donnelly and colleagues [...] indicates that the overall incidence of hypoglycemia in insulin-treated type 2 diabetes is approximately one-third of that in type 1 diabetes [...] Because the prevalence of type 2 diabetes is ~20-fold greater than that of type 1 diabetes [...] most episodes of iatrogenic hypoglycemia, including severe iatrogenic hypoglycemia, occur in people with type 2 diabetes.”
“The physical morbidity of an episode of hypoglycemia ranges from unpleasant symptoms, such as palpitations, tremulousness, anxiety, sweating, hunger, and paresthesias (Towler et al. 1993), and cognitive impairments with behavioral changes, to seizure, coma, or, rarely, death (Cryer 2007). [...] Hypoglycemia causes functional brain failure that is corrected in the vast majority of instances after the plasma glucose concentration is raised [...] Prolonged, profound hypoglycemia can cause brain death, but that is very rare and most fatal episodes are the result of other mechanisms, presumably cardiac arrhythmias [...] One cardiac mechanism is impaired ventricular repolarization, reflected in a prolonged corrected QT (QTc) interval in the electrocardiogram, which is known to be associated with lethal ventricular arrhythmias. [...] Older estimates were that 2 to 4% of people with type 1 diabetes died from hypoglycemia (Deckert et al. 1978; Tunbridge 1981; Laing et al. 1999). More recent reports in type 1 diabetes include hypoglycemic mortality rates of 4% (Patterson et al. 2007), 6% (DCCT/EDIC 2007), 7% (Feltbower et al. 2008), and 10% (Skrivarhaug et al. 2006).”
“The first defense against falling plasma glucose concentrations is a decrease in pancreatic β-cell insulin secretion. The second defense is an increase in pancreatic α-cell glucagon secretion. The third defense, which becomes critical when glucagon is deficient, is an increase in adrenomedullary epinephrine secretion. If these three physiological defenses fail to abort the episode, lower plasma glucose levels trigger a more intense sympathoadrenal (sympathetic neural as well as adrenomedullary) response that causes symptoms and thus awareness of hypoglycemia that prompts the behavioral defense [which is ingestion of carbohydrates]. [...] All of these defenses are typically compromised in type 1 diabetes and advanced type 2 diabetes [...] compromised glucose counterregulation is the key feature of the pathogenesis of iatrogenic hypoglycemia in type 1 diabetes and advanced type 2 diabetes. Hypoglycemia in diabetes is typically the result of the interplay of relative or absolute therapeutic insulin excess and compromised physiological and behavioral defenses against falling plasma glucose concentrations [...] In fully developed (i.e., C-peptide–negative) type 1 diabetes, circulating insulin levels do not decrease as plasma glucose concentrations decline through or below the physiological range. [...] Furthermore, circulating glucagon levels do not increase as plasma glucose concentrations fall below the physiological range [...] Thus, both the first defense against hypoglycemia — a decrease in insulin levels — and the second defense against hypoglycemia — an increase in glucagon levels — are lost in type 1 diabetes. Therefore, patients with type 1 diabetes are critically dependent on the third defense against hypoglycemia, an increase in epinephrine levels. However, the epinephrine secretory response to hypoglycemia is typically attenuated in type 1 diabetes [...] 
Through mechanisms yet to be clearly defined but often thought to reside in the brain [...], the glycemic threshold for sympathoadrenal — both adrenomedullary and sympathetic neural — activation is shifted to lower plasma glucose concentrations by recent antecedent hypoglycemia [...], as well as by prior exercise [...] and by sleep [...] The reduced responses to a given level of hypoglycemia cause the clinical syndromes of defective glucose counterregulation and hypoglycemia unawareness [which is] impairment or even complete loss of the warning, largely neurogenic symptoms that previously prompted the behavioral defense, the ingestion of carbohydrates. Hypoglycemia unawareness—or more precisely impaired awareness of hypoglycemia—is common in type 1 diabetes [...] Compared with patients with type 1 diabetes who have absent insulin and glucagon responses but have normal epinephrine responses, patients with absent insulin and glucagon responses and reduced epinephrine responses have been shown to be at 25-fold [...] or greater [...] increased risk for severe iatrogenic hypoglycemia during aggressive glycemic therapy [...] At least in part because of the clinical importance of hypoglycemia in people with diabetes, studies of the molecular and cellular physiology and pathophysiology of the CNS [central nervous system]-mediated neuroendocrine, including sympathoadrenal, responses to falling plasma glucose concentrations are an increasingly active area of fundamental neuroscience research.”
“The risk factors for hypoglycemia in people with diabetes [...] follow directly from the pathophysiology of glucose counterregulation in diabetes [...]. The principle is that iatrogenic hypoglycemia in type 1 diabetes and advanced type 2 diabetes is typically the result of the interplay of relative or absolute therapeutic insulin excess and compromised physiological and behavioral defenses against falling plasma glucose concentrations, i.e., hypoglycemia-associated autonomic failure (HAAF) in diabetes.
People with diabetes are not immune to hypoglycemia caused by mechanisms other than the treatment of their diabetes [...]. Those include 1) an array of drugs [...] including alcohol, 2) critical illnesses such as renal, hepatic or cardiac failure, sepsis, or inanition, 3) hormone deficiency states such as adrenocortical failure, 4) nonislet tumor hypoglycemia, 5) endogenous hyperinsulinism, and 6) accidental, surreptitious, or even malicious hypoglycemia. However, aside from drug effects, those mechanisms are very uncommon. [...] if all other factors are the same, patients treated to lower, compared with higher, A1C levels are at higher risk for hypoglycemia. Stated differently, studies with a control group treated to a higher A1C level consistently report higher rates of hypoglycemia in the group treated to a lower A1C level in type 1 diabetes [...] and type 2 diabetes [...] lower mean plasma glucose concentrations and greater plasma glucose variability are also associated with a higher risk of hypoglycemia [...] Improved glycemic control before and during pregnancy is particularly important in the short term because it improves pregnancy outcomes in women with type 1 diabetes. But, it increases the frequency of hypoglycemia substantially [...] In one series, 45% of 108 women with type 1 diabetes suffered severe hypoglycemia during their pregnancies; compared with a prepregnancy rate of 110 per 100 patient-years, the incidence was the equivalent of 530, 240, and 50 episodes per 100 patient-years in the first, second, and third trimesters, respectively (Nielsen et al. 2008).”
“Based on a systematic review and meta-analysis of randomized controlled trials published up to 2012, Yeh et al. (2012) concluded that CSII [Continuous subcutaneous insulin infusion] (compared with MDI [multiple daily injection]), real-time CGM [continuous glucose monitoring] (compared with SMPG [self-monitored plasma glucose]), and sensor-augmented CSII (compared with MDI and SMPG) had not been shown to reduce the incidence of severe hypoglycemia in type 1 or type 2 diabetes. [...] these technologies may, or may not, be shown to reduce the frequency of hypoglycemia in the future.”
I recently realized that I had actually never read a textbook like this on this topic. I did get some reading materials back when I got diagnosed so it’s not like I’ve never read anything about the stuff (and there was a lot of verbal information back then as well), but as mentioned I haven’t read a text on the topic. It was actually due to the old reading materials in question that I ended up deciding to read this book; I was looking for some other stuff the other day and I ended up perusing some of these materials (which I hadn’t seen in years), and I figured I should probably go read a book on the topic. Now I am.
The book is sort of okay. There are various complaints one might make, the most important one of which in the context of me reading the book is perhaps that children with autism-spectrum disorders grow up and become adults, and adults prefer to read chapters about adult stuff, not stuff about e.g. how to teach the preschooler with the diagnosis social skills. I’ve read roughly half the book at this point, and in my opinion there has not been enough material about the adult setting so far. Another complaint is that I as usual am somewhat mistrustful when guys like these talk about the conclusions to be drawn from some types of empirical evidence; the coverage of behavioural interventions has in my opinion been of decidedly mixed quality, in the sense that the authors on the one hand at one point reasonably frankly acknowledge that the evidence is sparse and of poor quality, and on the other hand later on seem to become very excited about a longitudinal study and start drawing big conclusions from that single study – which would be sort of fine, I like longitudinal studies, if not for the fact that the study was based on 6 (!) individuals. Similar things happen elsewhere in that part of the coverage – potential power issues are never mentioned in the book, at least not so far – you find yourself reading about a ‘seminal’ study of 19 individuals, and then you move on to comments about how several other studies have supported those findings, including a study looking closer at 9 of the individuals involved in the original study. Sometimes it’s hard to know what to think, especially in situations where the only people evaluating the interventions are the people who came up with them in the first place – this doesn’t seem like a particularly smart way to conduct business, though in some parts of psychology it seems to be more or less standard practice.
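To make the power concern a little more concrete, here is a minimal simulation sketch; the effect size, sample sizes, number of trials, and detection criterion are all hypothetical numbers chosen for illustration, not taken from the book. The point is simply that even when a treatment genuinely shifts outcomes by half a standard deviation, a comparison based on 6 individuals per group will ‘detect’ the effect only a small fraction of the time, whereas a comparison based on 60 per group usually will:

```python
import random
import statistics

def detection_rate(n, effect=0.5, trials=2000, seed=1):
    """Estimate how often a crude two-sample comparison 'detects' a true
    effect of the given size (in standard deviations) with n subjects per
    group. 'Detection' here means the difference in group means exceeds
    roughly two standard errors - a deliberately simple stand-in for a
    significance test."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        control = [rng.gauss(0.0, 1.0) for _ in range(n)]
        treated = [rng.gauss(effect, 1.0) for _ in range(n)]
        diff = statistics.mean(treated) - statistics.mean(control)
        se = ((statistics.variance(control) + statistics.variance(treated)) / n) ** 0.5
        if diff / se > 2:
            hits += 1
    return hits / trials

# The small-sample detection rate comes out far below the large-sample one.
print(detection_rate(6), detection_rate(60))
```

With numbers like these, the n = 6 study misses the (real) effect most of the time, which is precisely why drawing big conclusions from it – in either direction – is risky.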
The stuff on behavioural interventions has in my opinion been some of the weakest stuff in the book so far, which is why I have not talked about it in my coverage below. Some of the proposed interventions are incredibly expensive, and there’s probably a good reason why such things are usually not covered by public health care systems; however, the authors do not really seem to consider economic aspects to be all that important, except to the extent that economic factors unfortunately restrict access to all these nice things we could do for these children. They’re aware that parents may not be able to afford the treatment options which are recommended at this point by people who would benefit from these treatment options being more widely used, but they don’t seem to be aware of the existence of things like cost-effectiveness analyses. It’s one thing to argue that there may be developmental gains to be achieved by early childhood interventions (I’ve previously done work in educational economics, and I can tell you that it is a common finding in this literature that you can improve outcomes by throwing lots of money and attention at young children – a finding which should perhaps not be super surprising..), it’s quite another thing to argue that the specific interventions contemplated are cost-effective. To be fair, cost-effectiveness is incredibly hard to evaluate when the contemplated intervention takes place during the first years of a child’s life and may have effects lasting basically the rest of that individual’s life, but in my opinion you need to at least pretend to try to address this aspect somehow; if you don’t, you’re quite likely to end up in a situation where it seems as if you’re acting as if there’s no (societal) budget constraint, and the authors of this book seem to me to move very close to this position at various points in the coverage.
I knew very little (nothing?) about autism-spectrum disorders before I got the diagnosis – I got diagnosed very late, in adulthood. It’s sort of funny how you can miss important stuff like this without even knowing, and in a way it relates to a point which came up in my recent post on ethics, specifically the point that ‘bad’ people tend to think they are ‘good’ people, or at least no worse than average. How much do you really know about how good other people are at, say, interpreting nonverbal social signals? Would withdrawal from social interaction make the comparison easier or harder? If you don’t really engage in the normal patterns of non-verbal information exchange, e.g. eye contact, during social situations, how are you to know that important information is contained in such exchanges? Individuals seem to make assumptions about these things to a large extent based on what they know themselves (about themselves?), and if you have limitations in these areas it may be difficult to figure out that this is the case. Another apt analogy might be children who need glasses early in life – we screen for vision impairment in young children in part because young children don’t know, and may never on their own realize, that the world is not supposed to be blurry, and that you’re actually supposed to be able to see all the letters written on the blackboard.
I thought I should make one thing clear before moving on to the main text, a point particularly relevant considering the comic which I decided to start out with: incompetence should not be equated with, or interpreted as, malicious intent. It seems to me that many people conceive of people with autism-spectrum disorders as inconsiderate jerks who don’t have a clue – I’ve seen quite smart people state relatively similar things in the past. I dislike the ‘jerk’ model because I try to be thoughtful and considerate when interacting with others, and when people think that way I feel that they’re devaluing the work I put into this stuff. One important problem which is sort of hard to figure out how to deal with is that I’m well aware that the more thoughtful and considerate I am (…or is it: ‘try to be’?) during social encounters, the more taxing the social interactions may become, and taxing social interactions lead to social isolation and withdrawal. Coming up with a good equilibrium level of effort is not an easy task, and I think one needs to address aspects like these before making strong judgments about things like the jerkishness of specific behaviours. In a way people with social anxiety have similar concerns which other people also cannot observe (in their case excessive amounts of thinking during social situations about whether they are doing something right now that may mean they’ll get rejected by others, which then leads to oversensitivity to cues of rejection, leading to social avoidance because of perceived rejection). Of course people with autism-spectrum disorders may be anxious as well, as also mentioned in the coverage below.
The level of self-awareness varies a lot in people with autism-spectrum disorders, but people with relatively high levels of self-awareness may certainly face some constraints and tradeoffs which are not immediately obvious to the outsider and which may actually be assumed by neurotypicals to be absent, given the diagnosis.
The textbook answered one question I’d been thinking about a few times without ever worrying enough about it to actually seek out an answer, which is the question of what the recent diagnostic changes might mean, given that I have a diagnosis which by now has been ‘retired’. It turns out that I was diagnosed with what in the textbook are considered to be the ‘gold-standard tools’, which means that this remark related to the recent diagnostic changes seems to answer the question: “The DSM-5 noted that “Individuals with a well-established DSM-IV diagnosis of [...] Asperger’s disorder [...] should be given the diagnosis of autism spectrum disorder””. I’m not going to ‘ask’ for a ‘new’ diagnosis (/a ‘translation’ of my diagnosis) (and quite aside from what other people like to call this stuff, I like the word ‘eccentric’ a lot better than the word ‘autistic’…), but it’s nice to know which recommendations are being made in this area. Some of the quotes below also relate a bit to these aspects.
I’ve added some quotes from the book below.
“Autism is a developmental neurobiological disorder characterized by severe and pervasive impairments in reciprocal social interaction skills and communication skills (verbal and nonverbal), and by restricted, repetitive, and stereotyped behavior, interests, and activities. [...] Autism and autistic stem from the Greek word autos, meaning “self.” The term autism originally referred to a basic disturbance, an extreme withdrawal of oneself from social life, or aloneness. [...] The critical point in the scientific history of autism was in 1943, when Leo Kanner published Autistic Disturbances of Affective Contact, a groundbreaking paper that described the symptoms of 11 children presenting similar behaviors that had not been previously recognized. [...] Based on Kanner’s terminology, autism was considered for years a psychosis, and child psychiatrists were using “childhood schizophrenia” and “child psychosis” in autism as “interchangeable diagnoses.” [...] A parallel line of inquiry to that of Kanner and Eisenberg is represented by the work of Hans Asperger.”
“In Autism and Pervasive Developmental Disorders, Fred Volkmar and Catherine Lord (2004) distinguished important points of differentiation and similarities between Kanner’s and Asperger’s descriptions. [...] In concluding their comparison of Kanner’s and Asperger’s descriptions, Volkmar and Lord pondered whether, despite the relevant differences, it was “scientifically and clinically helpful to classify individuals with these traits into separate categories of autism or Asperger’s disorder, or whether it would be better to treat them as parts of a greater continuum.” The utility of the “greater continuum” has led to the category of autism spectrum disorder to be proposed for DSM-5. [...] As a result of [various findings] and the lack of reliability in the community in making distinctions among the ASDs [Autism-Spectrum Disorders] [for example: "Variations in clinical severity among ASD cases are not valid indices of differences in pathophysiology or etiology"], the Fifth Edition of the Diagnostic and Statistical Manual (DSM-5) proposes to collapse all of these clinical syndromes into a single diagnosis of “autism spectrum disorder.” Although this revision is appropriate for community diagnosis, and thus the allocation of clinical and support services, research studies will continue to rely on research diagnostic instruments like the Autism Diagnostic Interview (ADI) and the Autism Diagnostic Observation Schedule (ADOS) [these were both part of my work-up, US] to make categorical distinctions between “autism and not autism” and “autism and autism spectrum disorder” (which includes Asperger’s disorder and PDDNOS [Pervasive Developmental Disorder Not Otherwise Specified]). These distinctions have played a vital role in advancing our understanding of the behavioral and neural profile of ASD over the past two decades”
“Recent studies and reports from the Centers for Disease Control [...] have shown an increase in the prevalence of children diagnosed with an ASD to one in 110 [...] The reported increase is thought to be attributable to several factors. First, there have been changes in diagnostic practices [...] Second, there is greater public awareness of ASD and more case-finding [...] Finally, there has been a tendency to diagnose many children with intellectual disability as PDD. [...] no evidence currently exists to support any association between ASD and a specific environmental exposure. [...] Numerous studies have failed to demonstrate a causal relationship between immunizations, particularly thimerosal-containing vaccines, and ASD [...] The CDC (2009) reports the median age for a diagnosis of ASD to be between 4.5 and 5.5 years. [...] the ASD diagnosis is four times more common in boys than in girls.”
“The essential features of Asperger’s disorder are severe and sustained impairment in social interaction (criterion A); and the development of restricted, repetitive patterns of behavior, interests, and activities (criterion B); which must cause clinically significant impairment in functioning (criterion C). There are no clinically significant delays in language (criterion D) or cognitive development (criterion E).”
“ASD (excluding Asperger’s disorder) has early language and communication impairment. [...] almost two thirds of individuals with ASD also have ID [intellectual disability] [...] 15%–20 % of cases of ASD are now linked to genetic or
chromosomal abnormalities [...] Fragile X Syndrome (FXS) [is] the most common identifiable cause of ASD and the most common inheritable cause of ID. [...] Thirty percent of individuals with FXS demonstrate characteristics of ASD.” [In some other conditions penetrance is even higher - examples that could be mentioned are 15q duplication and Timothy Syndrome, but prevalence is lower in these cases and especially in the latter case some might argue that the autism is the least of that child's problems..]
“Challenging behaviors [in individuals with ASD] may reflect pain that is not communicated verbally [...] Challenging behaviors may [also] reflect the child’s difficulty with communication, changes, new places, new situations, new experiences, new sounds, new smells, and new people” [I wonder if you can spot a pattern here in terms of what these children (/people) don't like? I think an important distinction is to be made here between curiosity and the desire to try out new things. I'm often hesitant about trying out new things, yet I'm also quite curious about a lot of things. Be careful which categories you apply here and how they may impact your thinking... In a related vein:] “Insistence on sameness and difficulty with change are common symptoms of an ASD. These behaviors should not typically be considered a behavior done to exert control over others.”
“Psychiatric comorbidity is now acknowledged as quite common in ASD [and] psychiatric comorbidity increases the level of impairment [...] There is a handful of questionnaires [aiming at spotting psychiatric comorbidities] that have been developed specifically for use in developmentally disordered or ASD populations. [...] none of the measures has the level of research support possessed by questionnaires used in other branches of psychiatry. The vast majority of these instruments have just one study behind their development, or have been studied only by the developer of the instrument. [...] one of the main challenges in diagnosing psychiatric disorders in individuals with ASD is the possibility of different presenting symptoms and difficulty in differentiating impairment related to the underlying ASD from impairment due to a separate condition. [...] While we do not want to miss true comorbid diagnoses, over-diagnosing comorbidity can be equally harmful. [...] Mood disorders, such as depression and bipolar disorder, in ASD have recently begun to receive a great deal of attention [...] there are many potential psychosocial stressors that could be possible triggers. For example, higher-functioning individuals who are aware of their deficits and badly desire friends, but lack success in this area, are at particular risk. [...] Although there is little research on emotion regulation in ASD, there is clear evidence that emotion regulation is highly variable and often problematic in this population, regardless of psychiatric comorbidity [...] Therefore, particularly for mood disorders, it is imperative to consider baseline functioning and not over-diagnose mood disorders when the concern may be more temperamental in nature.”
“Anxiety is considered by some to be the most common comorbid psychiatric concern in ASD [...]. The DSM-IV-TR notes that individuals with ASD might have unusual fear reactions, and it is also not uncommon for there to be a general tendency toward anxiety for many individuals with ASD. [...] There are many aspects of having an ASD that may lead to this increased risk for anxiety, to the degree that some consider anxiety and the social impairment in ASD to have a bidirectional relationship [...] An increase in self-awareness is considered a risk factor for higher anxiety; therefore, anxiety is typically thought of as more common among individuals with ASD who have higher intellectual abilities, and older children, adolescents, and adults.”
“autism may be conceptualized as a disorder of complex information processing resulting from disordered development of the connectivity of cortical systems (e.g., failure of cortical systems specialization) [...] approximately 15%–20% of infants with an older sibling diagnosed with autism will ultimately be diagnosable with ASD by three to four years of age. [...] [Findings from longitudinal sibling studies] do not support the view that autism is primarily a social-communicative disorder and instead suggest that autism disrupts multiple aspects of development rather simultaneously. [...] When both elementary and higher-order abilities in many domains are assessed, it becomes evident that deficits exist in several domains not considered to be integral parts of the autism syndrome, including aspects of the sensory-perceptual, motor, and memory domains. Furthermore, there are enhanced skills and impaired abilities within the same domains as deficits (e.g., memory, language, abstraction). [...] Causal explanations for ASD must account for the comprehensive pattern of both deficits and intact aspects of the disorder both within and across multiple domains. [...] There is no single primary deficit or triad of deficits, brain regions, or neural systems causing autism. [...] Rather, autism broadly affects many abilities at the same time and systematically from its earliest presentation and throughout life. [...] This pattern [can] be characterized overall as reflecting a disorder of complex or integrative information processing, which results from altered development of cerebral cortical connectivity in ASD. [...] Just as the infant sibling studies have clearly demonstrated, studies of children and adults with autism have also demonstrated a broad but selective profile of deficits and intact or enhanced abilities that all reflect a relationship to information-processing demands. [...] 
it is likely that genes affecting signaling pathways that regulate neuronal organization are strongly implicated in the etiology of autism.”
“ASD is now conceptualized as a developmental neurobiological disorder affecting elaboration of the forebrain circuitry that underlies the abilities most unique to human beings. [...] Wiring the brain requires that neurons proliferate, acquire the correct identities, migrate to the appropriate locations, extend axons, and make guidance decisions with a high degree of spatial and temporal fidelity. Converging evidence indicates that more than one of these processes may be altered in various combinations to produce the heterogeneous phenotypes observed in ASD. [...] Studies examining head circumference (HC) and brain volume (BV) in individuals with ASD have demonstrated altered brain growth trajectories across the lifespan. [...]
• Up to 70 % of infants with ASD exhibit abnormally accelerated brain growth in the first year of life. Approximately 20% to 25% of infants in this subset actually meet formal criteria for macrocephaly (i.e., HC of 2.0 standard deviations above the mean) in the first year.
• BV is significantly larger by two to four years of life, and some children meet criteria for megalencephaly (i.e., BV 2.5 S.D. above mean).
• The first two years of life are usually a period of rapid brain growth in infants as neurons undergo significant postnatal growth in cell size and elaboration (actually overproduction) of axons, synapses, and dendrites. It is possible that this process is exaggerated somehow in at least a subset of ASD.
• Whatever the neurobiological basis, abnormal growth rates in ASD tend to decline significantly after the initial acceleration, causing an apparent “normalization” of BV by adolescence or early adulthood. [...]
At the time of maximal brain growth in very early childhood, cerebral gray matter (GM) and white matter (WM) are both increased [...] The frontal cortical GM and WM show the most enlargement, followed by the temporal lobe GM and WM and the parietal GM.”
“Thus far, [fMRI and fcMRI] studies have identified underconnectivity with the frontal cortex as a specific characteristic of the altered connectivity in autism, and this characteristic is present across the same wide range of domains of complex information processing that are affected in the disorder, including social, language, executive, and motor processes. [...] measures of functional connectivity between specific areas have been shown to reliably predict the degree of impairment in specific domains among those diagnosed with autism. For instance, individuals with poorer social functioning measured by the ADI-R show lower functional connectivity between frontal and parietal cortices. These findings gave rise to the underconnectivity theory in autism, which now has sufficient support that it is accepted as a central feature of the pathophysiology of autism [...] Results from these studies are consistent with the notion that autism is a disorder of distributed neural systems (e.g., the connections between structures rather than the structures themselves). [...] Diffusion-weighted imaging measures the direction and speed of microscopic water movement in the brain, allowing inferences about the microstructure of the tissue that constrains such movement. These studies have consistently found reduced structural integrity of white matter in adults with ASD, indicating reduced anatomical connectivity [...] like measures of functional connectivity, measures of anatomical connectivity derived from diffusion imaging have been shown to reliably predict symptom severity among individuals with autism.”
“In thinking about the genetic basis of autism, it is important to contrast syndromic (or complex) and non-syndromic (or idiopathic/essential) ASD. [...] Syndromic ASD includes identifiable autism syndromes with known genetic causes, such as tuberous sclerosis complex, Fragile- X syndrome, Rett syndrome, and Smith-Magenis syndrome.
• Syndromic ASD is associated with a relatively higher propensity for dysmorphic features (including anatomical brain abnormalities), intellectual disability (ID), seizures, and female sex (sex ratios are almost equal).
• Syndromic ASD is also associated with a higher frequency of chromosomal abnormalities in general, many of which have been identified [...]. However, it is not yet clear for many of these syndromes which features are typical of autism and which are unique.
Non-syndromic ASD is also called idiopathic autism and consists of cases with and without identifiable micro-deletions or duplications to the DNA. [...] individuals with idiopathic ASD are more likely to be male, with sex ratios approximately 1:4 (F:M) but approaching 1:7 in milder cases.”
“Overall, approximately 10 % of children being evaluated for ASD are found to have an identified medical condition with a known genetic lesion such as Fragile X or tuberous sclerosis. An additional 10 % or more have an identifiable chromosomal structural abnormality or copy number variation associated with ASD. [...] Recent genome-wide scans using microarray technology have demonstrated a substantial role for small chromosomal deletions or duplications (i.e., copy number variation or CNV) in the etiology of ASD. [...] There is [however still] considerable debate concerning the genetic architecture underlying [...] the majority of idiopathic autism. Arguments can be made for either the effects of single, but rare Mendelian causes (for which documented CNVs are presumably the tip of the iceberg) or the interaction of numerous common, but low-risk alleles. Genetic linkage and association studies have been traditionally employed to address the latter model, but have failed to consistently identify susceptibility loci.” [An important point I should perhaps make before finishing this post is that if incidence/prevalence of a condition is increasing fast in a population, which seems to be the case here, such an increase is in general considered to be unlikely to only be the result of genetic changes at the population level - that type of pattern is usually indicative of environmental factors playing an important role. It may well be that the 'average cause' is different from the 'marginal cause', and that it may be a good idea to be careful in terms of which tools to use to explain base rates and growth rates. It might be argued that increased assortative mating among nerds in Silicon Valley has increased incidence locally (I'm sure this might be argued as I'm quite sure I've seen this exact argument before...) 
and I'm not saying this may not be the case, but if close to one percent of the American population gets diagnosed, what goes on in Silicon Valley probably isn't super relevant one way or the other - only roughly 1% of the population live in that area altogether. Even if you were to argue that a similar process is going on everywhere else in the country, it sort of strains belief that 'something else' is not also going on].
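The arithmetic behind this aside can be made explicit with a back-of-envelope sketch. All numbers below are illustrative assumptions of mine (rounded prevalence, rounded population share, and an invented relative-risk figure), not data from the book:

```python
# Back-of-envelope: how much can a purely local effect move a national rate?
national_prevalence = 0.01   # ~1 in 110 is roughly 0.9%; rounded to 1% (assumption)
sv_population_share = 0.01   # roughly 1% of Americans live in the area (assumption)
sv_relative_risk = 2.0       # suppose local incidence doubled (hypothetical)

# National prevalence if Silicon Valley's rate doubled while the rest stayed put:
adjusted = (sv_population_share * sv_relative_risk * national_prevalence
            + (1 - sv_population_share) * national_prevalence)
print(round(adjusted, 4))  # 0.0101
```

Even under the generous assumption that local incidence doubled, the national figure moves by about 0.01 percentage points - which is the sense in which a process confined to 1% of the population can't do much explanatory work for a nationwide increase.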
“The system of normative ethics which I am here concerned to defend is [...] act-utilitarianism. [...] Roughly speaking, act-utilitarianism is the view that the rightness or wrongness of an action depends only on the total goodness or badness of its consequences, i.e. on the effect of the action on the welfare of all human beings (or perhaps all sentient beings).”
The book is simple: The first half tells you why (act-)utilitarianism is great, and the second half tells you why utilitarianism sucks.
I’ve been unsure how to blog this book, and as I’m writing this I have still yet to decide on the best approach. It probably makes sense to start out with some general remarks. The first general remark is that I liked Smart’s half (the first half) better than Bernard Williams’ half, to a significant degree because it is in my opinion much easier to read and understand than especially the first half of the second half of the book – regardless of the merits of the arguments, I simply think J.C.C. Smart is a much better writer than Bernard Williams. There are some important points hidden away in Williams’ account, but in my opinion he waffles so much that you sometimes don’t really care one way or the other. Trained philosophers may disagree, but I’m not used to reading philosophical texts, and stuff like that is part of the reason.
The second general remark is that this book reminded me why I don’t really care about moral philosophy in the first place. Moral judgments don’t really interest me very much. Coming up with elaborate systems (or, in some cases, not-so-elaborate systems) of thought which allow some action patterns and disallow others, evaluated by considering how these systems perform in hypothetical scenarios which may or may not ever happen to anyone you know (“the common methodology of testing general ethical principles by seeing how they square with our feelings in particular instances”, as Smart puts it in the book..), or perhaps evaluated by figuring out whether the systems are self-consistent, simply seems to me a strange way to identify good decision(/justification) rules.
I have come to realize that my opinion of the coverage – but perhaps especially of Smart’s account – is influenced by some thoughts I had a while back and discussed with a friend last week. I was at the time considering blogging some of those thoughts, but I decided against it. Anyway, these thoughts relate to how knowledge may shape how you think about stuff; this specific topic is actually covered in the book, though from a very different angle. I hold to the view that thinking which is more or less unconstrained by knowledge will most often be a very inferior type of thinking compared to the kind of (‘directed…’ was the word my friend used, a good word in this context I think) thinking which is constrained by data. What I came to realize along the way was that what I was really missing in this book was some actual knowledge about how humans behave, some understanding of why people behave the way they do, and how such aspects intersect both with which types of behaviours may in theory be ‘permissible’ or not, and with why people think the way they do about the thoughts they have and the actions they engage in. We know some stuff about those kinds of things; books have been written about them – for a neat little book on related topics, see Tavris & Aronson’s account. Smart mentions in his part of the book that: “If [...] act-utilitarianism were put forward as a descriptive systematization of how ordinary men, or even we ourselves in our unreflective and uncritical moments, actually think about ethics, then it is of course easy to refute [...] [But] it is precisely because a doctrine is false as description and as explanation that it becomes important as a possible recommendation.”
‘People don’t seem to make moral judgments the way I’d like them to, but if they did the world would be a better place’ may or may not be true, but when your argument is founded on logic and you don’t really have good data to suggest that this approach to making moral judgments actually leads to better ‘moral outcomes’ (whatever that may mean – but then again the proponent of such a view is free to define his terms and then argue why his system is better, as that is how people proceed in other areas, so this caveat may not be important), then I don’t really think you have a very strong case. People (well, some people – it’s probably mostly other economists…) occasionally criticize economists harshly when they fail to take general equilibrium effects into account when making policy recommendations based on partial-equilibrium analyses (‘the employment effects of a job programme involving 500 people may be very different from the employment effects of the same type of job programme scaled up so that it involves 50,000 people’); what these guys are doing is in some sense even worse, as they’re really arguing without any data at all – “I think this”, “I think that”.
I’m sure this kind of stuff relates to things like how you approach the topic of meta-ethics and where people stand on things like the non-cognitivist approach Smart talks about in his introduction, but I’m not well-versed in such matters. What I will say is that given what I know about many other topics (primatology, (/behavioural) economics, medicine, psychology, evolutionary biology, anthropology, …), I think the sort of approach these guys take to all of this stuff is not very ‘useful’; in my opinion you need to know and understand a lot about why people behave the way they do in order to even be in a position where you are justified in having any sort of opinion about how to evaluate the things people do or think in the first place. And these guys have not convinced me that they know much beyond the sort of things philosophers know. I’ll go into more detail about these aspects below, but before doing that I would point out that another way to approach moral questions than the one they apply would be to identify/define specific outcomes, behaviours or motivations of interest, analyze variation in data on these variables, and figure out if there are some useful patterns to be found. Perhaps people who commit murder have things in common, and perhaps some of the variables they have in common can be addressed/modified by policies and/or behavioural change at the individual level. I’m not a philosopher; this is more along the lines of ‘where I’m coming from’.
In terms of ‘the stuff I know’ I alluded to above, a few examples are probably in order to get at some of the issues:
i. “Research on parent-child conflict during the first decade of life most often has focused on emotional outbursts, such as temper tantrums [...] and coercive behavior of children toward other family members as evidence of conflict. The frequency of such behavior begins to decline during early childhood and continues to do so during middle childhood [...] The frequency of episodes during which parents discipline their children also decreases between the ages of three and nine [...] research on conflict management in this period has focused on the relative effectiveness of various parental strategies for gaining compliance and managing negative behaviors.” (link)
ii. “The result of an interview is usually a decision. Ideally this process involves collecting, evaluating and integrating specific salient information into a logical algorithm that has shown to be predictive. However, there is an academic literature on impression formation that has examined experimentally how precisely people select particular pieces of information. Studies looking at the process in selection interviews have shown all too often how interviewers may make their minds up before the interview even occurs (based on the application form or CV of the candidate), or that they make up their minds too quickly based on first impression (superficial data) or their own personal implicit theories of personality. Equally, they overweigh or overemphasise negative information or bias information not in line with the algorithm they use.” (link)
iii. “many doors in life are opened or closed to you as a function of how your personality is perceived. Someone who thinks you are cold will not date you, someone who thinks you are uncooperative will not hire you, and someone who thinks you are dishonest will not lend you money. This will be the case regardless of how warm, cooperative, or honest you might really be. [...] a long tradition of research on expectancy effects shows that to a small but important degree, people have a way of living up, or down, to the impressions others have of them. [...] judges use stereotypes as an important basis for their judgment only when they have little information about the target. [...] When you know someone well you can base your judgments on what you have seen. When you have little information, you fall back on stereotypes and self-knowledge.” (link)
iv. “The need for closure (NFC) has been defined as a desire for a definite answer to a question, as opposed to uncertainty, confusion, or ambiguity [...] People exhibit stable personal differences in the degree to which they value closure. Some people may form definitive, and perhaps extreme, opinions regardless of the situation, whereas others may resist making decisions even in the safest environments. [...] Taken together, the research on intrapersonal processes demonstrates that people who are high in NFC seek less information, generate fewer hypotheses, and rely on early, initial information when making judgments. [...] The manner in which people interpret their own and other people’s behaviors and outcomes is linked predictably with their self-esteem and self-concepts. [...] a large body of research on attribution processes shows that people high in self-esteem take credit for their successes and blame their failures on external factors [...] In contrast, people low in self-esteem are less inclined to take credit for their successes and more inclined to assume responsibility for their failures” (link)
v. “All addictive drugs are subjectively rewarding, reinforcing and pleasurable. Laboratory animals volitionally self-administer them, just as humans do. Furthermore, the rank order of appetitiveness in animals parallels the rank order of appetitiveness in humans [...] it is relatively easy to selectively breed laboratory animals for the behavioral phenotype of drug-seeking behavior (the behavioral phenotype breeds true after about 15 generations in laboratory rodents)” (link)
vi. “Psychological autopsy studies in the West have consistently demonstrated strong associations between suicide and mental disorder, reporting that 90% of people who die by suicide have one or more diagnosable mental illness” (link)
vii. “Evolutionary explanations are recursive. Individual behavior results from an interaction of inherited attributes and environmental contingencies. In most species, genes are the main inherited attributes, but inherited cultural information is also important for humans. Individuals with different inherited attributes may develop different behaviors in the same environment. Every generation, evolutionary processes — natural selection is the prototype — impose environmental effects on individuals as they live their lives. Cumulated over the whole population, these effects change the pool of inherited information, so that the inherited attributes of individuals in the next generation differ, usually subtly, from the attributes in the previous generation. [...] Culture is a system of inheritance. We acquire behavior by imitating other individuals much as we get our genes from our parents. A fancy capacity for high-fidelity imitation is one of the most important derived characters distinguishing us from our primate relatives [...] We are also an unusually docile animal (Simon 1990) and unusually sensitive to expressions of approval and disapproval by parents and others (Baum 1994). Thus parents, teachers, and peers can rapidly, easily, and accurately shape our behavior compared to training other animals using more expensive material rewards and punishments.” (link)
viii. “When two people produce entirely different memories of the same event, observers usually assume that one of them is lying. [...] But most of us, most of the time, are neither telling the whole truth nor intentionally deceiving. We aren’t lying; we are self-justifying. All of us, as we tell our stories, add details and omit inconvenient facts [...] History is written by the victors, and when we write our own histories, we do so just as the conquerors of nations do: to justify our actions and make us look and feel good about ourselves and what we did or what we failed to do. If mistakes were made, memory helps us remember that they were made by someone else. If we were there, we were just innocent bystanders. [...] We remember the central events of our life stories. But when we do misremember, our mistakes aren’t random. The everyday, dissonance-reducing distortions of memory help us make sense of the world and our place in it, protecting our decisions and beliefs. The distortion is even more powerful when it is motivated by the need to keep our self-concept consistent; by the wish to be right; by the need to preserve self-esteem; by the need to excuse failures or bad decisions; or by the need to find an explanation, preferably one safely in the past” (link)
ix. “The basic idea behind self-signaling is that despite what we tend to think, we don’t have a very clear notion of who we are. We generally believe that we have a privileged view of our own preferences and character, but in reality we don’t know ourselves that well (and definitely not as well as we think we do). Instead, we observe ourselves in the same way we observe and judge the actions of other people—inferring who we are and what we like from our actions. [...] We may not always know exactly why we do what we do, choose what we choose, or feel what we feel. But the obscurity of our real motivations doesn’t stop us from creating perfectly logical-sounding reasons for our actions, decisions, and feelings.” (link)
One key point is that people are different, in all sorts of ways. They’re systematically different in terms of behavioural dispositions, and some behaviours may to a great extent be simply the result of biological factors (drug abuse is certainly relevant here, and suicide probably is as well. These are relevant to the discussion not just because there are relevant differences in behavioural dispositions, but also because people tend to think they ought to have views about the ethics of these behaviours). If individuals are different and such differences are important in terms of which actions the individuals are likely to engage in, it might be natural to suggest that taking such differences into account may be an important component in the evaluation of the ethical properties of a given behaviour. That was actually not the point I was going for, as I’m not sure I really care a great deal about what moral systems should look like. However it does seem to me that people are taking many individual-level differences into account, to varying degrees, when making moral judgments, whether or not they ‘should’.
The basic point is that people are different, and so they have different moral systems. This is not a new idea of mine, and I’ve previously touched upon factors of relevance in this analysis; see for example this post (key point: “If you’re better able to handle complexity you’re able to make use of more complex moral algorithms.”). Another way to think about it, which also relates to the quotes above, would be to say that as people use their moral systems repeatedly to justify their own behaviours, and as people behave in different ways, it’s really beyond doubt that people have different moral systems which incorporate different stuff. When looked at from that point of view, utilitarianism is really just one system (or family of systems), one which appeals to some specific people for specific reasons related to why those people are the way they are and behave the way they do. This is my observation, not an observation made in the book, but Williams does touch very briefly upon related aspects, in the sense that he talks about “the spirit of utilitarianism, and [...] its demand for a rational, decideable, empirically based, and unmysterious set of values”, and at the end of his contribution charges the system with “simplemindedness”.
The social dimension alluded to in the quotes above seems relevant as well. Individuals are different from each other, but so are different groups of individuals (see e.g. vii). Groups are particularly important because stuff like social feedback systems are really important determinants of individual behaviours, and important determinants in terms of how individuals approach various questions and actions. For example people may act differently when they’re in a group than they do when they’re on their own – ethicists may or may not agree that such differences are relevant to the ethical judgment of behaviour, but there’s a potential variable lurking here which some people may consider to be important. Another related example might be that some people may search out social environments that contain people who are likely to approve of their behaviours and avoid social environments including people who do not – they may, in short, behave in a manner which may make enforcement of ethical systems more difficult. Some people may also respond differently to social feedback than do other people. If some people do consider such variables to be important when making moral judgments, and you’re planning to discuss ethics with such people, then you probably need to have some knowledge about how groups of people work, and how social aspects impact behaviour (i.e. you need to know some stuff about social psychology, sociology and related fields).
One argument here which is implicit is that if you have a moral system which makes judgments without regard to the knowledge we actually have of how people behave and why they behave the way they do, you’re likely to end up ‘left behind’ in the long run. You end up with something like religious rules, where you have a system of behavioural rules which perhaps sort of made sense, kind of, during a period where people didn’t really know anything about anything, but which makes a lot less sense now because we know better. It’s not hard to argue, though I’m sure some moral philosophers might disagree with me, that it is better to medicate the schizophrenic than to deem him mad and incarcerate him. I make this point explicit because, at least judging from this book, the philosophical approach to how to handle ethical systems and evaluate their attributes seems to me to have many things in common with the religious approach, and much less in common with a behavioural sciences approach. Thought-experiments asking questions like how you would/should behave if you happened to find yourself in front of a guy who’s threatening to shoot 20 other people unless you shoot one of them yourself may be useful in terms of illustrating key aspects of an ethical system, but is this kind of analysis really likely to lead you very far? Some of this stuff seems to me not that different from theology. ‘People who act friendly and non-threatening in social situations are more likely to find friends and keep them’ (or whatever) seems to me to be much more useful information, in terms of how to answer questions such as ‘what is a good (‘ethical’) way to live your life’, than are thought experiments like these and discussions about key assumptions related to them. It seems to me that a lot of what these people are doing is adding new floors to the ivory tower and not much else.
Regarding the ‘risk of being left behind’ comment above, I should note that I’m aware this is perhaps a problematic way to think about things. Some people (especially religious people, presumably) would certainly argue that it makes a lot of sense to adopt a sort of Darwinian approach to meta-ethics and to consider the moral systems likely to persist and ‘survive’ to be ‘better’ than the alternatives; in which case religious systems have a lot going for them, in part because they’re very good at constraining thinking and suppressing certain lines of thought likely to weaken the systems (like the thought that all this stuff is just made up). Williams talks about related matters in his coverage – his view, incidentally, is that such implicit constraints on moral thinking are a good thing, and he considers the absence of such constraints to be a problem with utilitarianism – so I decided to include a few relevant quotes on that matter below:
“It could be a feature of a man’s moral outlook that he regarded certain courses of action [...or thought, US] as unthinkable, in the sense that he would not entertain the idea of doing them [...] Entertaining certain alternatives, regarding them indeed as alternatives, is itself something which he regards as dishonourable or morally absurd. But, further, he might equally find it unacceptable to consider what to do in certain conceivable situations. [...] Consequentialist rationality, however, and in particular utilitarian rationality, has no such limitations: making the best of a bad job is one of its maxims”
Something I found interesting in that part is that Williams does not make clear that constraints on moral thinking have the potential to lead to both good and bad ‘outcomes’, even though a related symmetry argument seems to be used by both proponents and opponents of utilitarianism in the context of events taking place in the far future. (‘Lead to ‘better’ or ‘worse’ performing moral systems’ would be a statement inclusive enough to also incorporate non-consequentialist ethical systems, it seems to me, but then a different problem, related to what we mean by ‘better’ or ‘worse’, of course pops up.) Anyway, if you have difficulty conceptualizing this idea it probably makes sense to just model it this way: constraints on moral thinking may stop you from thinking that it might be a good idea to kill all the Jews (the argument being that where people are free to think this thought, the associated outcome becomes more likely), but such constraints may also stop you from thinking that killing Jews is wrong, if you happen to live in a society where killing Jews is the morally enforced norm. Note incidentally, on a related if different note, that how much time has passed since the event in question may have a significant influence on the moral judgments people make about a given action (see viii).
I do not think people use utilitarian systems of thought to decide which actions to engage in, and as mentioned previously neither does Smart; he is careful to point out in his coverage that what he’s defending is a normative system, not a descriptive one. In my view people often don’t know why they do the things they do, and even when they think they do, they probably don’t, really, because there are an incredible number of relevant aspects, and people probably often don’t know about half of them. “But the obscurity of our real motivations doesn’t stop us from creating perfectly logical-sounding reasons for our actions, decisions, and feelings”, as pointed out in ix. We’re not rational creatures, but we are rationalizing creatures. People may use a utilitarian framework to present the decision context and the decision process, but it’s just a model. I probably also differ from Smart in that he may be a lot more optimistic than I am about the feasibility of even applying such a scheme. Smart would probably think about a hypothetical situation in this way: ‘I have thought about this potential action X, and it seems to me that the consequences of this action X would be that one person is made much better off and another person is made slightly worse off. If I do nothing instead, no-one is made either better off or worse off. I wish to maximize average happiness, and so this action seems justified. Thus I shall now proceed to do X.’ I would be more likely to think along these lines: ‘Smart’s primate brain had decided after 2/10ths of a second that Smart wanted to do X. Smart’s primate brain is good at making Smart think he’s in charge, so now Smart’s brain will engage in a bit of work which will yield him the answer ‘he’ already decided upon.’
The utilitarian model is just a model, and/but it’s the type of model which appeals to some types of people more than others. When you look at it like this, it sort of changes how you view whether the question of whether ‘people should use utilitarian systems of thought more’ even makes sense. A book like this will probably in some ways tell you more about the personalities of the authors than about the desirability of the more widespread ‘implementation’ (whatever that may mean) of a specific ethical system of thought. There’s no data here, just arguments, so neither of the authors really has a clue, would be my contention, and they probably would not even be able to agree about how to evaluate competing systems if they did. It is not perfectly true that they ‘have no clue’: for example, the information problems pointed out in Williams’ account towards the end, where he talks a bit about collective decision making rather than individual-level decision making in a utilitarian framework (the point being that you need a lot of data, which is not available, in order to engage in utilitarian analyses and semi-sensible utilitarian-inspired decision making at e.g. the population level), certainly do have at least some real-world relevance; but I think it’s close enough. One aspect that really irritated me about this coverage is that although there are some potentially valuable distinctions made along the way (people may employ the correct decision rule yet end up with a bad outcome anyway, and such things may be important when making moral judgments (…or judgments about how best to set up compensation schemes in organizations, I’ll add…); when deciding whether or not to praise an action, a potentially relevant distinction is to be made between the desirability of the action and the desirability of praising the action), they don’t really get very far. If I ever find myself facing a Mexican who’s about to kill 20 people, I’ll know what to do, but…
Some people might have read some of the stuff above and thought to themselves that if you’re a hardcore consequentialist/utilitarian who does not care about anything but the consequences of actions and the utility derived from them, then you probably don’t care whether the individual made the decision because he was sleep-deprived or had high levels of testosterone in his blood due to an untreated medical condition. That’s the whole point, that you disregard irrelevant factors like intentions and similar stuff, right? I have sort of assumed this would not be the utilitarian’s reply, because in that case the system seems to me to devolve into a caricature very fast (on account of the ‘and similar stuff’ part, not the intentions part) – a caricature where you lock away the schizophrenic. I think there’s a big difference between including in the analysis people’s explicit justifications for their actions (leading to a ‘you meant well’ judgment) and including other, implicit, factors which might also have influenced behaviour (‘the cancer patient was tired and in pain, and that was why she yelled at her neighbour when his dog ran into her garden’). There’s a difference between explaining and explaining away, but the two sort of go hand in hand. In case you were not aware of this, this objection does not only relate to individual-level decision making; objections with a similar structure can be made in the context of population-level decision making, where the behaviours of groups of people may also have explanations/reasons which are relevant to the ethical judgment yet unrelated to the explicit justifications people forward for behaving the way they do.
I’m not sure how I feel about the validity of some of the specific arguments to be made in the latter case, or how relevant they are/ought to be to the moral judgments in question, but I did want to mention this aspect to preclude people from erroneously assuming that even if there are problems at the level of the individual, such problems go away when you start looking at groups of people instead. I don’t think this is true at all, though of course the details differ across social contexts.
I know that I have not really talked a great deal about the actual contents of the book in this post; if you’re curious to learn more about what’s in there you’re welcome to ask, and maybe I can be persuaded to provide some more details. I was planning to perhaps include a few quotes from the book in a future quotes post, but aside from that I’m not really considering spending any more time on the book here on the blog.
Feedback on the thoughts and ideas presented here is very welcome.