“There are both costs and benefits associated with conducting scientific and technological research. Whereas the benefits derived from scientific research and new technologies have often been addressed in the literature (for a good example, see Evenson et al., 1979), few of the major non-monetary societal costs associated with major expenditures on scientific research and technology have so far received much attention.
In this paper we investigate one of the major non-monetary societal cost variables associated with the conduct of scientific and technological research in the United States, namely the suicides resulting from research activities. In particular, we analyze the association between scientific and technological research expenditure patterns and the number of suicides committed using one of the most common suicide methods, namely hanging, strangulation and suffocation (HSS). We conclude from our analysis that there is a very strong association between scientific research expenditures in the US and the frequency of suicides committed using the HSS method, and that this relationship has been stable for at least a decade. An important aspect of the association is the precise mechanism through which the increase in HSS suicides takes place. Although the mechanisms are still not well elucidated, we suggest that one of the important components in this relationship may be judicial research, as initial analyses of related data have suggested that this variable may be important. We argue in the paper that our initial findings in this context provide impetus for considering this pathway a particularly important area of future research in this field.”
“Murders by bodily force (Mbf) make up a substantial number of all homicides in the US. Previous research on the topic has shown that this criminal activity compromises some common key biological functions in victims, such as respiration and cardiac function, and that many people with close social relationships with the victims are psychosocially affected as well, which means that this societal problem is clearly of some importance.
Researchers have known for a long time that the marital state of the inhabitants of the state of Mississippi and the dynamics of this variable have important nation-wide effects. Previous research has e.g. analyzed how the marriage rate in Mississippi determines the US per capita consumption of whole milk. In this paper we investigate how the dynamics of Mississippian marital patterns relate to the national Mbf numbers. We conclude from our analysis that it is very clear that there’s a strong association between the divorce rate in Mississippi and the national level of Mbf. We suggest that the effect may go through previously established channels such as e.g. milk consumption, but we also note that the precise relationship has yet to be elucidated and that further research on this important topic is clearly needed.”
This abstract is awesome as well, but I didn’t write it…
The ‘funny’ part is that I could actually easily imagine papers not too dissimilar to the ones just outlined getting published in scientific journals. Indeed, in terms of the structure I’d claim that many published papers are exactly like this. They do significance testing as well, sure, but hunting down p-values is not much different from hunting down correlations and it’s quite easy to do both. If that’s all you have, you haven’t shown much.
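The correlation-hunting point is easy to demonstrate for yourself: generate enough unrelated series and some pairs will come out looking "strongly associated" by chance alone. Here's a minimal Python sketch of the idea (all names, thresholds and numbers are my own choices, purely for illustration):

```python
import itertools
import random
import statistics

random.seed(0)

# 20 unrelated random-walk series, 30 "annual" observations each --
# roughly the shape of the time series the parody abstracts above rely on.
def random_walk(n=30):
    x, out = 0.0, []
    for _ in range(n):
        x += random.gauss(0, 1)
        out.append(x)
    return out

series = [random_walk() for _ in range(20)]

def pearson(a, b):
    # Plain Pearson correlation coefficient.
    ma, mb = statistics.mean(a), statistics.mean(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    return cov / (statistics.pstdev(a) * statistics.pstdev(b) * len(a))

# With 190 pairs to search through, chance alone (plus the shared trends
# that random walks tend to produce) typically yields several |r| > 0.8
# "findings" -- none of which mean anything.
strong = [(i, j) for i, j in itertools.combinations(range(20), 2)
          if abs(pearson(series[i], series[j])) > 0.8]
print(len(strong))
```

If all a paper reports is that it found such a pair, it has shown roughly as much as this script does.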
Here’s what I wrote about the book on goodreads:
“This book is almost 25 years old, and this is one of the main reasons why I did not give it five stars. Parts of this book are just amazing, but the fact that I felt that it was necessary to continually look up terms and ideas covered in the book made it slightly less fun to read than it could have been. Some parts of the scientific vocabulary applied throughout the book are frankly outdated, and this aspect reflects not only a change in which words are used but also, more importantly, a change in how people think about these things. That progress has been made since the book was written is a good thing, but it did subtract a little from the overall reading experience that I very often felt that I had to be quite careful about which specific conclusions to accept and which to question. It does not help that some of the main conclusions towards the end of the book seem to have been proven, for lack of a better word, wrong.
But all in all it’s really a very nice book – there’s a lot of fascinating stuff in there.”
A few sample quotes from the book:
“a distinction needs to be made between the two major types of animal fossils — body fossils and trace fossils. Body fossils are either actual parts of the organism’s body (such as a shell or a bone), or impressions of body parts (even if the parts themselves have been dissolved away or otherwise destroyed). The imprint of a feather or leaf or the external surface of a shell are examples of body fossils. […] Trace fossils are markings in the sediment (usually made while the sediment was still soft) left by the feeding, traveling, or burrowing activities of animals. Familiar examples of trace fossils include tracks and trails made by worms as they plow through sediment looking for food and ingesting sediment. […] Completely unrelated organisms can make trace fossils which are indistinguishable to paleontologists. Trace fossils are part of the fabric of the sediment, and therefore can be very resistant to destruction by metamorphism of the surrounding rock. Body fossils, on the other hand, are often destroyed by chemical reactions with the surrounding sediment. But body fossils are the only fossil type that can consistently give reliable information about the identity of the organism which left the remains. […] The worst problem in the search for the oldest animal fossils is mistaken identity. Sedimentary rocks are replete with irregular structures and small scale disturbances or interruptions of the horizontal bedding or layering. Some of these disturbances are caused by organisms, but many are not. […] Usually a well-preserved and well-formed trace fossil is unquestionably biologic in origin, and all paleontologists would agree that the trace was formed by an animal. Yet it can be difficult to define precisely what it is about a trace fossil that makes it convincingly biogenic (formed by life). […] A sedimentary structure that resembles, but is in fact not, a trace fossil (or a body fossil, for that matter) is called a pseudofossil. 
Pseudofossils have plagued the study of Precambrian paleontology because many inorganic sediment disturbances look deceptively like fossils.”
“Convincing trace fossils are known from the late Precambrian, sometimes in association with the soft bodied Ediacaran fossils (Glaessner 1969). These trace fossils are generally simpler, less common, and less diverse than Cambrian trace fossils. There is a significant difference in the complexity and depth of burrowing between Cambrian and Precambrian trace fossils, and it has been argued that the changeover from simple trace fossils to more complex types of traces occurred at more or less the same time as the Cambrian explosion, the first appearance of abundant Cambrian shelly fossils. […] Even shallow, sediment surface burrows in the Cambrian show a marked change in character over their Precambrian predecessors. […] something outstanding happened to the abilities of trace-fossil makers across the Precambrian-Cambrian boundary. Animals discovered a large number of ways to effectively use the sediment as a food resource, and also began to move deeper into the substrate for deposit feeding and homebuilding.”
“Seilacher (1984, 1985) recognizes that flattened body shapes maximize surface area for the takeup of oxygen and food dissolved in seawater, and perhaps also for the absorption of light. “Normal” metazoan animals generally have plump, more or less cylindrical, bodies. For very small, thin skinned animals, cells near the body surface can get oxygen and expel waste by simple diffusion across the cell surface membranes. Waste products such as carbon dioxide will be supersaturated inside of the animal’s body, and will tend to migrate out of its cells and into the open environment. The reverse is true for oxygen; it will tend to migrate into the cells because its concentration is greater on the outside than on the inside of an oxygen-respiring animal. Animals such as frogs and salamanders are able to respire (at least in part) in this way. But for most large, cylindrical animals, diffusion respiration will not work because diffusion is ineffective for cells buried deep within the animal’s body. This is a consequence of the fact that as an animal increases its size, its total volume outstrips its surface area by a large margin. […] metazoans have developed intricate systems of pipework and tubing to deliver nutrient and waste removal services to interior cells. Circulatory systems, digestive tracts, gills, and lungs are all solutions to the problems associated with volume increase.”
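The scaling argument in that passage is pure geometry: volume grows with the cube of linear size while surface area grows only with the square, so the membrane area available per unit of tissue shrinks as an animal gets bigger. A quick illustration for spheres (the shape is my choice for simplicity; any shape gives the same trend):

```python
import math

# Surface-area-to-volume ratio of a sphere: (4*pi*r^2) / ((4/3)*pi*r^3) = 3/r.
# Every tenfold increase in radius cuts the area available per unit of
# volume tenfold -- which is why diffusion alone stops working for large,
# plump animals and why circulatory systems, gills and lungs evolved.
for r in [0.1, 1.0, 10.0]:  # radius in mm
    area = 4 * math.pi * r**2
    volume = (4 / 3) * math.pi * r**3
    print(f"r = {r:5.1f} mm  ->  surface/volume = {area / volume:.1f} per mm")
```

A 0.1 mm animal has a hundred times more surface area per unit of volume than a 10 mm one, matching the book's point that only very small, thin-skinned animals can rely on diffusion respiration.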
“Monoplacophorans […] are cap-shaped shells distinguished by two rows of muscle scars on the interior of the shell. They were thought extinct until living specimens were dredged from the deep sea and described in the late 1950s. Monoplacophorans have had an unusual history of discovery. They are the only group of animals that has been: (a) described hypothetically before being discovered; (b) found as fossils before being found alive; and (c) dredged from the depths of the oceans before being collected from shallower marine waters (Pojeta et al. 1987). […] Rostroconchs are a major, extinct, order of mollusks that first appeared in the earliest Cambrian. Rostroconchs have a shell that is shaped like a clam shell, except that instead of having an organic ligament connecting the two valves, the two halves of a rostroconch shell are fused together to form a single valve. Despite this fusion, larger rostroconchs look very much like clam fossils with valves still articulated, which partly explains why rostroconchs were not recognized as a major, distinct, group until the 1970s. […] Slightly after the first appearance of rostroconchs, the first true clams or bivalves appear. Clams probably had the same ancestor as the rostroconchs […]. Instead of keeping the two valves fused as in rostroconchs, clams hinged the valves with articulating teeth and a tough, organic ligament. This evidently proved to be the more successful approach, since bivalve shells now litter the beaches all over the earth, whereas rostroconchs dwindled to extinction in the Permian.”
“Of the earliest Cambrian shelly fossils, many groups are truly problematic in the sense that not only do we have no idea what kind of animal made them, but also we have no clear conception of the function or functions of the skeletal remains. […] there is an anomalously high proportion of small shelly fossils that do not belong to later phyla. “Living fossils” are creatures alive today that have undergone very little morphologic change for long stretches (sometimes 100 million years or more) of geologic time. Few living fossils remain from the earliest Paleozoic fauna. […] Many of the groups that were most important in the Cambrian are unimportant or extinct today, for example, the trilobites, the inarticulate brachiopods, hyoliths, monoplacophorans, eocrinoids, the sclerite-bearers, and phosphatic tube-formers. True metazoans were undoubtedly present before the Cambrian, but they were all, with [few] exception[s] […], soft-bodied. New types of soft-bodied animals appear in the Cambrian as well, but our understanding of these forms is restricted to rare finds of Cambrian soft-bodied fossils, which are even rarer than finds of the Ediacaran fauna.”
I’ll just quote that last part again: “our understanding of these forms is restricted to rare finds of Cambrian soft-bodied fossils”.
They’re talking about finds of soft-bodied organisms – animals that made no shells or other hard parts – which lived more than 500 million years ago. To get a sense of perspective on how long ago that is, have a look at this picture – that’s one guess at what we think the Earth might have looked like back then. In my mind, the fact that we know anything at all about soft-bodied animals living back then is pretty amazing to think about.
I could easily write perhaps four posts about this book, but I’m not going to do that. Instead I have decided for now to limit my coverage to the stuff above and some links to relevant material I looked up while reading the book, which I have posted below – I was surprised how much relevant stuff wikipedia has on related matters, and if you’re curious you should really go have a look at some of those links. I will probably add another post about the book later on with some more observations – it seems wrong to limit coverage of this great book to a single post, but there’s no way I can cover all the good stuff in there anyway.
As mentioned, here are some relevant wiki links to the kinds of stuff they talk about in this book – most of the articles are of a ‘reasonable’ length and quality in my opinion, and although I have not read all of them, some of them are quite good:
Ediacara biota (featured).
Cloudinid (‘good article’).
Brachiopod (‘good article’).
Bryozoa (‘good article’).
Global Boundary Stratotype Section and Point (noteworthy in this context is that the Precambrian/Cambrian boundary GSSP at Fortune Head had not been decided upon when this book was written – they have a whole chapter about these and related things).
Marinoan glaciation (this is not what it’s called in the book, but that is what they’re talking about anyway).
Timeline of glaciation.
Great Oxygenation Event.
I’m not sure how interesting these lectures will be to people reading along here, but I figured I might as well share them.
I finished the book. Here’s what I wrote on goodreads:
“Parts of the coverage were complete crap and/or dealt with stuff that was not remotely interesting to me, but other parts were actually really good – which makes the book sort of difficult to rate. In the end I decided to settle on a two star rating, even though I consider some of the recommendations made in the book to be far from ‘okay’.”
One of those recommendations is to give narcotics to young children with abnormal brains who don’t behave the way their parents would like them to, even though it’s clear that such approaches only deal with symptoms and do nothing to address the actual underlying causes (see below; I haven’t quoted extensively from this part of the book, but you should note that I draw different conclusions than the authors do – as I incidentally do in other areas as well…). Ritalin, Adderall, etc.; subjecting young children whose brains are still developing to these types of pharmacological interventions makes my skin crawl, especially as I consider it more or less beyond doubt that part of what is going on here is simply the medicalization of normal behaviours.
I have added some observations from the last half of the book below.
“Over the years there have been many [behavioural] interventions developed for young children with ASD [autism spectrum disorder]. Thus far, research has not demonstrated that any particular intervention approach is better than the others. […] The amount of empirical support for different treatment approaches varies significantly. However, in general, more empirical support is needed for all approaches, and studies comparing the efficacy and effectiveness of different approaches are sorely lacking. […] many community-based programs may offer eclectic approaches combining elements of various types of interventions within a single program. Although we are beginning to understand better the types of approaches that seem to be successful for young children with autism and their families, we really have no way of knowing the extent to which eclectic, community-based approaches are successful.”
“Impairment in social communication is one of the diagnostic criteria for ASD; therefore, communication is universally affected in individuals with ASD. Social communication includes many nonverbal aspects such as eye gaze and facial expression. […] The degree of affectedness of spoken language can vary widely in individuals with ASD. Sixty to 70% of individuals with ASD are low-verbal or nonverbal [“30% of individuals with autism develop fluent spoken language”], with substantial difficulty with the understanding of spoken language and the ability to use it for functional communication (Fombonne, 2005). However, most children with ASD develop some spoken language skills [“50% of individuals with autism develop some usable spoken language”]. In a study of a large sample of nine-year-olds with ASD, fewer than 15% were classified as nonverbal (defined as using fewer than five words per day) […]. Children with ASD generally have large amounts of immediate or delayed echolalia in which they immediately imitate what someone else says or repetitively use language they have heard from sources such as television, movies, books, or videogames. […] Echolalia is thought to result because children with ASD have an abnormally large “attentional window” for language, resulting in learning larger “chunks” of language. Multiple words are treated as if they were a single word. […] Even high-functioning individuals with ASD may have difficulty with understanding the social use of language because they interpret words literally and have difficulty making inferences. Highly verbal individuals with ASD may monopolize the conversation by going on and on about a topic that is interesting to them and failing to give their communication partner a turn to speak. Language proficiency or verbal ability is consistently associated with positive outcomes in social and adaptive functioning for children and adults with ASD.”
“The employment outcomes for individuals with high-functioning autism and Asperger’s disorder are reported to be generally much lower than would have been expected on the basis of their intellectual functioning (Howlin, 2004)” [Here is one relevant quote from the book which I found after a brief skim: “despite having IQ scores well within the normal range (and sometimes reaching quite high academic levels) the majority of individuals in both the Asperger and high-functioning autism groups studied by Howlin (2003) had no close friends, remained highly dependent on their families for support and had low employment status” – unfortunately no specific data was included in that section. This article has some more general numbers dealing with all individuals on the autism-spectrum – one relevant quote: “The unemployment rate for autistic people seems to be about 66%, according to data from 2009, compared to about 9% for the general population. Some estimates, like Bell’s, are even higher: 80-85% unemployment.”]
“People with ASD require predictability, consistency, and structure”
“Most individuals with ASD are diagnosed in early childhood or by school-age. Some young adults are evaluated for symptoms of high-functioning autism or Asperger’s disorder that have not been identified earlier.” [It seems my experience is not unique… No, I did not think it was…]
“Some common characteristics seen in persons with ASD that may lead to difficulty coping include:
• Challenges in interpreting nonverbal language
• Rigid adherence to rules
• Poor eye-gaze or avoidance of eye contact
• Few facial expressions and trouble understanding the facial expressions of others
• Poor judge of personal space — may stand too close to other students
• Trouble controlling their emotions and anxieties
• Difficulty understanding another person’s perspective or how their own behavior affects others
• Very literal understanding of speech; difficulty in picking up on nuances
• Unusually intense or restricted interests in things (maps, dates, coins, numbers/statistics, train schedules)
• Unusual repetitive behavior, verbal as well as nonverbal (hand flapping, rocking)
• Unusual sensitivity to sensations […]
• Difficulty with transitions, need for sameness
• Possible aggressive, disruptive, or self-injurious behavior […]
Situations that often increase anxiety for persons with Asperger’s disorder and lead to difficulty coping include:
• When conversation involves multiple speakers
• Rapid shifting of topics
• Latency of response
• Difficulty in seeking clarification
• Lack of confidence
• Overabundance of irrelevant information”
“Research supporting the use of CBT [cognitive behavioural therapy] in ASD is limited […] At this stage, it remains difficult to make strong conclusions regarding the efficacy of CBT in people with ASD”
“There are currently three well-accepted theories of social development in ASD. Each will be briefly described below.
Theory of Mind deficit
Definition: Difficulty with both awareness and understanding of another individual’s perspective. This was later referred to as “mind blindness.” Research has shown that most typical children learn this skill by age four, while children with ASD learn this much later, between the ages of nine and 14 years.
The social implications of Theory of Mind deficits (Cumine et al., 1998)
• Difficulty predicting the behavior of others, leading to the avoidance of anxiety-producing situations
• Difficulty reading the intentions of others and understanding motives behind their behavior
• Difficulty explaining their own behavior
• Difficulty understanding emotions, their own and those of others, leading to the appearance of lack of empathy
• Difficulty understanding how their behavior affects how others think or feel, leading to an apparent/perceived lack of conscience or motivation to please others
• Difficulty taking into account what other people know or can be expected to know, leading to the appearance of disorganized cognitive processing
• Inability to read and react to the listener’s level of interest in what is being said
• Inability to anticipate what others might think of one’s actions
• Inability to deceive or understand deception
• Poor sharing of attention
• Lack of understanding of social interactions that enable the initiation and maintenance of social relationships […]
Central Coherence deficit
Definition: Difficulty drawing multiple sources of social and environmental information together, causing problems with understanding the larger contextual picture.
The social implications of Central Coherence deficits (Cumine, et al., 1998)
• Idiosyncratic focus of attention
• Imposition of the individual’s own perspective onto others’ experiences
• A preference for the known
• Inattentiveness to new tasks
• Difficulty choosing and prioritizing
• Difficulty organizing themselves, materials, and experiences
• Difficulty seeing social connections, thus causing problems with generalizing skills and knowledge
• Lack of compliance with directives that they do not understand […]
Executive Functioning deficits
Definition: Difficulty with the following executive functioning skills:
• Self monitoring
• The ability to inhibit various social responses
• The ability to express behavioral flexibility
• Processing and expressing information in an organized and fluid manner
The social implications of Executive Functioning deficits (Ozonoff, 1995)
Causes difficulties with the following:
• Perceiving others’ emotions
• Imitation of social behaviors
• Pretend play, which is essential to early learning
• Planning, organizing, and prioritizing
• Starting and stopping activities, behaviors, and thoughts”
“Just as there is a spectrum of autism, there is also a range of social motivation. Some individuals are extremely interested in engaging with their peers, but struggle to appropriately initiate or maintain social connectedness. Others with ASD have very little social motivation. These individuals often report extreme anxiety when engaging with people for a variety of reasons. For example, it may be difficult to predict how the people around them will behave and, therefore, individuals on the spectrum may choose to avoid all anticipated anxiety-provoking social environments. […] Many people assume that a lack of social initiation and reciprocal communication indicates that individuals with ASD lack the desire to engage in social interaction. On the contrary, many individuals with ASD lack the skills to be successful socially, yet they desire to be a part of social relationships”
“A recent meta-analysis of 55 single-subject research studies revealed that “social skills programs for children with autism are largely ineffective” (Bellini, 2007).”
“no medication has yet been identified that is capable of treating the core features of ASD […] However, various medications have been used to treat behavioral symptoms in ASD […] Antidepressant medications have been used in the treatment of specific psychiatric comorbid disorders in individuals with ASD. They have also been used to target selected symptoms in ASD such as repetitive preoccupations, perseverative behaviors, and social anxiety. […] The Interactive Autism Network (IAN) reported in 2009 that, out of 5,174 children diagnosed with ASD, 12.2% of them [were] taking antidepressant medication. […] [The] reports [on antidepressants] have been anecdotal, small case studies, and small designed studies. Investigation has been limited by small sample size, broad age range, and being uncontrolled. The [single] large double-blind, placebo-controlled study of citalopram in 149 children with ASD […] found that there was no significant improvement on multiple measures. […] Antiepileptic medications are typically used for treatment of seizures, which may occur in approximately one third of children with ASD […] Mood-stabilizing antiepileptic drugs (AEDs) have also been used to treat mood instability, agitation and aggression in ASD. Despite the availability of more than a handful of double-blind trials (most of which failed to support the use of AEDs), surveys of psychopharmacological use among children and adults with ASD suggest fairly frequent use. […] It is not uncommon for individuals with ASD to be prescribed more than one psychotropic medication. For example, in the 2005 ASD psychoactive medication survey conducted by Aman and colleagues, 9.8% of the sample were identified as taking two different drugs; 7.7% were taking three drugs […] To date, we are unaware of any randomized controlled trials involving the use of polypharmacy in ASD.”
[I also talked about these aspects in the previous post, but I think they’re more clear about the details in the last part of the coverage than they were in the parts on this stuff I covered earlier, so I’ll include these observations as well:] “Considering that deficits in communication and social behaviors are inseparable and more accurately considered as a single set of symptoms with contextual and environmental specificities, the DSM-IV-TR’s three domains (communication deficits, social deficits, stereotypic interests and behaviors) become two in DSM-5: (a) social/communication deficits, and (b) fixated interests and repetitive behaviors. The newly proposed diagnosis for ASD requires that both criteria be completely fulfilled. In addition, the current clinical and research consensus appears to be that Asperger’s disorder is part of ASD. Research currently reflects that Asperger’s disorder is not substantially different from other forms of “high-functioning” autism with good formal language skills and good (at least verbal) IQ.”
I liked this book and I gave it 3 stars on goodreads. Much of it was a review of stuff also covered in Sperling et al. (or elsewhere, see also this blog-post which actually includes some of the same data included in the coverage below), but there was some new stuff as well. I’ve added some relevant observations from the book below – I incidentally do not think most of the stuff included in this post should be at all hard to read for people who do not have diabetes.
“Hypoglycemia is a fact of life for most people with type 1 diabetes […] The average patient suffers untold numbers of asymptomatic episodes, two episodes of symptomatic hypoglycemia per week (thousands of such episodes over a lifetime of diabetes), and one episode of severe, temporarily disabling hypoglycemia, often with seizure or coma, per year.
Given increased recognition of the magnitude of the problem of iatrogenic hypoglycemia in type 1 diabetes, and practical improvements in the glycemic management of diabetes, over the nearly two decades since the Diabetes Control and Complications Trial (DCCT) was reported in 1993 (DCCT 1993), one might anticipate that hypoglycemia would have become less of a problem. Unfortunately, there is no evidence of that in population-based studies. For example, in their study reported in 2007, the U.K. Hypoglycaemia Study Group (UK Hypo Group 2007) found the incidence of severe hypoglycemia in patients with type 1 diabetes treated with insulin for <5 years to be comparable to that in the Stockholm Diabetes Intervention Study (Reichard and Pihl 1994) (both 110 per 100 patient-years) reported in 1994 and higher than that in the DCCT”
“the U.K. Hypoglycaemia Study Group (UK Hypo Group 2007) found the incidence of severe hypoglycemia in patients with type 1 diabetes treated with insulin for >15 years (320 episodes per 100 patient-years) to be threefold higher than in individuals treated for <5 years […] Hypoglycemia is particularly common during the night […] A consistent observation since the DCCT (1991, 1993, 1997) is that more than half of the episodes of hypoglycemia, including severe hypoglycemia, occur during the night (Chico et al. 2003; Guillod et al. 2007). […] Antidiabetic drugs, mostly insulin, [have been] found to be second only to anticoagulants as a cause of emergency hospitalization for adverse drug events in people >65 years of age, and those visits [are] almost entirely because of hypoglycemia (Budnitz et al. 2011). […] Overall, hypoglycemia is less frequent in type 2 diabetes than in type 1 diabetes […] the risk of hypoglycemia is relatively low in the first few years of insulin treatment of type 2 diabetes […], [however] the risk increases substantially, approaching that in type 1 diabetes, later in the course of type 2 diabetes […] The prospective, population-based study of Donnelly and colleagues […] indicates that the overall incidence of hypoglycemia in insulin-treated type 2 diabetes is approximately one-third of that in type 1 diabetes […] Because the prevalence of type 2 diabetes is ~20-fold greater than that of type 1 diabetes […] most episodes of iatrogenic hypoglycemia, including severe iatrogenic hypoglycemia, occur in people with type 2 diabetes.”
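That last conclusion is worth spelling out, because it follows from simple arithmetic on the two quoted ratios: if the per-person incidence in insulin-treated type 2 diabetes is roughly a third of that in type 1, but roughly 20 times as many people have type 2, the total episode count ends up several-fold higher in the type 2 group. A back-of-the-envelope sketch (the absolute numbers here are invented; only the two ratios come from the quote):

```python
# Illustrative calculation only. The patient count and the incidence unit are
# made up; only the ~20-fold prevalence ratio and the ~one-third incidence
# ratio are taken from the quoted text. Not every type 2 patient is treated
# with insulin, so this overstates the type 2 total -- the point is just the
# direction of the effect.
type1_patients = 1_000_000            # hypothetical
type1_incidence = 1.0                 # episodes per patient-year (arbitrary unit)

type2_patients = 20 * type1_patients      # ~20-fold greater prevalence
type2_incidence = type1_incidence / 3     # ~one-third the per-person incidence

type1_total = type1_patients * type1_incidence
type2_total = type2_patients * type2_incidence

print(type2_total / type1_total)  # 20/3, i.e. roughly 6.7x more episodes
```

Even with the lower per-person risk, the sheer size of the type 2 population dominates, which is the book's point about where most episodes of iatrogenic hypoglycemia actually occur.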
“The physical morbidity of an episode of hypoglycemia ranges from unpleasant symptoms, such as palpitations, tremulousness, anxiety, sweating, hunger, and paresthesias (Towler et al. 1993), and cognitive impairments with behavioral changes, to seizure, coma, or, rarely, death (Cryer 2007). […] Hypoglycemia causes functional brain failure that is corrected in the vast majority of instances after the plasma glucose concentration is raised […] Prolonged, profound hypoglycemia can cause brain death, but that is very rare and most fatal episodes are the result of other mechanisms, presumably cardiac arrhythmias […] One cardiac mechanism is impaired ventricular repolarization, reflected in a prolonged corrected QT (QTc) interval in the electrocardiogram, which is known to be associated with lethal ventricular arrhythmias. […] Older estimates were that 2 to 4% of people with type 1 diabetes died from hypoglycemia (Deckert et al. 1978; Tunbridge 1981; Laing et al. 1999). More recent reports in type 1 diabetes include hypoglycemic mortality rates of 4% (Patterson et al. 2007), 6% (DCCT/EDIC 2007), 7% (Feltbower et al. 2008), and 10% (Skrivarhaug et al. 2006).”
“The first defense against falling plasma glucose concentrations is a decrease in pancreatic β-cell insulin secretion. The second defense is an increase in pancreatic α-cell glucagon secretion. The third defense, which becomes critical when glucagon is deficient, is an increase in adrenomedullary epinephrine secretion. If these three physiological defenses fail to abort the episode, lower plasma glucose levels trigger a more intense sympathoadrenal (sympathetic neural as well as adrenomedullary) response that causes symptoms and thus awareness of hypoglycemia that prompts the behavioral defense [which is ingestion of carbohydrates]. […] All of these defenses are typically compromised in type 1 diabetes and advanced type 2 diabetes […] compromised glucose counterregulation is the key feature of the pathogenesis of iatrogenic hypoglycemia in type 1 diabetes and advanced type 2 diabetes. Hypoglycemia in diabetes is typically the result of the interplay of relative or absolute therapeutic insulin excess and compromised physiological and behavioral defenses against falling plasma glucose concentrations […] In fully developed (i.e., C-peptide–negative) type 1 diabetes, circulating insulin levels do not decrease as plasma glucose concentrations decline through or below the physiological range. […] Furthermore, circulating glucagon levels do not increase as plasma glucose concentrations fall below the physiological range […] Thus, both the first defense against hypoglycemia — a decrease in insulin levels — and the second defense against hypoglycemia — an increase in glucagon levels — are lost in type 1 diabetes. Therefore, patients with type 1 diabetes are critically dependent on the third defense against hypoglycemia, an increase in epinephrine levels. 
However, the epinephrine secretory response to hypoglycemia is typically attenuated in type 1 diabetes […] Through mechanisms yet to be clearly defined but often thought to reside in the brain […], the glycemic threshold for sympathoadrenal — both adrenomedullary and sympathetic neural — activation is shifted to lower plasma glucose concentrations by recent antecedent hypoglycemia […], as well as by prior exercise […] and by sleep […] The reduced responses to a given level of hypoglycemia cause the clinical syndromes of defective glucose counterregulation and hypoglycemia unawareness [which is] impairment or even complete loss of the warning, largely neurogenic symptoms that previously prompted the behavioral defense, the ingestion of carbohydrates. Hypoglycemia unawareness—or more precisely impaired awareness of hypoglycemia—is common in type 1 diabetes […] Compared with patients with type 1 diabetes who have absent insulin and glucagon responses but have normal epinephrine responses, patients with absent insulin and glucagon responses and reduced epinephrine responses have been shown to be at 25-fold […] or greater […] increased risk for severe iatrogenic hypoglycemia during aggressive glycemic therapy […] At least in part because of the clinical importance of hypoglycemia in people with diabetes, studies of the molecular and cellular physiology and pathophysiology of the CNS [central nervous system]-mediated neuroendocrine, including sympathoadrenal, responses to falling plasma glucose concentrations are an increasingly active area of fundamental neuroscience research.”
“The risk factors for hypoglycemia in people with diabetes […] follow directly from the pathophysiology of glucose counterregulation in diabetes […]. The principle is that iatrogenic hypoglycemia in type 1 diabetes and advanced type 2 diabetes is typically the result of the interplay of relative or absolute therapeutic insulin excess and compromised physiological and behavioral defenses against falling plasma glucose concentrations, i.e., hypoglycemia-associated autonomic failure (HAAF) in diabetes.
People with diabetes are not immune to hypoglycemia caused by mechanisms other than the treatment of their diabetes […]. Those include 1) an array of drugs […] including alcohol, 2) critical illnesses such as renal, hepatic or cardiac failure, sepsis, or inanition, 3) hormone deficiency states such as adrenocortical failure, 4) nonislet tumor hypoglycemia, 5) endogenous hyperinsulinism, and 6) accidental, surreptitious, or even malicious hypoglycemia. However, aside from drug effects, those mechanisms are very uncommon. […] if all other factors are the same, patients treated to lower, compared with higher, A1C levels are at higher risk for hypoglycemia. Stated differently, studies with a control group treated to a higher A1C level consistently report higher rates of hypoglycemia in the group treated to a lower A1C level in type 1 diabetes […] and type 2 diabetes […] lower mean plasma glucose concentrations and greater plasma glucose variability are also associated with a higher risk of hypoglycemia […] Improved glycemic control before and during pregnancy is particularly important in the short term because it improves pregnancy outcomes in women with type 1 diabetes. But, it increases the frequency of hypoglycemia substantially […] In one series, 45% of 108 women with type 1 diabetes suffered severe hypoglycemia during their pregnancies; compared with a prepregnancy rate of 110 per 100 patient-years, the incidence was the equivalent of 530, 240, and 50 episodes per 100 patient-years in the first, second, and third trimesters, respectively (Nielsen et al. 2008).”
“Based on a systematic review and meta-analysis of randomized controlled trials published up to 2012, Yeh et al. (2012) concluded that CSII [Continuous subcutaneous insulin infusion] (compared with MDI [multiple daily injection]), real-time CGM [continuous glucose monitoring] (compared with SMPG [self-monitored plasma glucose]), and sensor-augmented CSII (compared with MDI and SMPG) had not been shown to reduce the incidence of severe hypoglycemia in type 1 or type 2 diabetes. […] these technologies may, or may not, be shown to reduce the frequency of hypoglycemia in the future.”
I recently realized that I had actually never read a textbook like this on this topic. I did get some reading materials back when I got diagnosed so it’s not like I’ve never read anything about the stuff (and there was a lot of verbal information back then as well), but as mentioned I haven’t read a text on the topic. It was actually due to the old reading materials in question that I ended up deciding to read this book; I was looking for some other stuff the other day and I ended up perusing some of these materials (which I hadn’t seen in years), and I figured I should probably go read a book on the topic. Now I am.
The book is sort of okay. There are various complaints one might make, the most important one of which in the context of me reading the book is perhaps that children with autism-spectrum disorders grow up and become adults, and adults prefer to read chapters about adult stuff, not stuff about e.g. how to teach the preschooler with the diagnosis social skills. I’ve read roughly half the book at this point, and in my opinion there has so far not been enough material about the adult setting. Another complaint is that, as usual, I am somewhat mistrustful when guys like these talk about the conclusions to be drawn from some types of empirical evidence; the coverage of behavioural interventions has in my opinion been of decidedly mixed quality, in the sense that on the one hand they at one point quite frankly acknowledge that the evidence is sparse and of poor quality, and on the other hand they later on seem to become very excited about a longitudinal study and start drawing big conclusions from that single study – which would be sort of fine, I like longitudinal studies, if not for the fact that the study was based on 6 (!) individuals. Similar things happen elsewhere in that part of the coverage – potential power issues are never mentioned in the book, at least not so far – you find yourself reading about a ‘seminal’ study of 19 individuals, and then you move on to their comments about how several other studies have supported those findings, including a study looking closer at 9 of the individuals involved in the original study. Sometimes it’s hard to know what to think, especially in situations where the only people evaluating the interventions are the people who came up with them in the first place – this doesn’t seem like a particularly smart way to conduct business, though in some parts of psychology it seems to be more or less standard practice.
The stuff on behavioural interventions has in my opinion been some of the weakest material in the book so far, which is why I have not talked about it in my coverage below. Some of the proposed interventions are incredibly expensive, and there’s probably a good reason why such things are usually not covered by public health care systems; however, the authors do not really seem to consider economic aspects to be all that important, except to the extent that economic factors unfortunately restrict access to all these nice things we could do for these children. They’re aware that parents may not be able to afford the treatment options currently recommended by people who would benefit from those treatment options being more widely used, but they don’t seem to be aware of the existence of things like cost-effectiveness analyses. It’s one thing to argue that there may be developmental gains to be achieved by early childhood interventions (I’ve previously done work in educational economics, and I can tell you that it is a common finding in this literature that you can improve outcomes by throwing lots of money and attention at young children – a finding which should perhaps not be super surprising..); it’s quite another thing to argue that the specific interventions contemplated are cost-effective. To be fair, cost-effectiveness is incredibly hard to evaluate when the intervention is supposed to take place during the first years of a child’s life and may have effects lasting basically the rest of the individual’s life, but in my opinion you sort of need to at least pretend to try to address this aspect somehow; if you don’t, you’re quite likely to end up in a situation where it seems as if you’re acting as if there’s no (societal) budget constraint, and the authors of this book seem to me to move very close to this position at various points in the coverage.
I knew very little (nothing?) about autism-spectrum disorders before I got the diagnosis – I got diagnosed very late, in adulthood. It’s sort of funny how you can miss important stuff like this without even knowing, and in a way it relates to a point which came up in my recent post on ethics, specifically the point that ‘bad’ people tend to think they are ‘good’ people, or at least no worse than average. How much do you really know about how good other people are at, say, interpreting nonverbal social signals? Would withdrawal from social interaction make the comparison easier or harder? If you don’t really engage in the normal patterns of non-verbal information exchange, e.g. eye contact, during social situations, how are you to know that important information is contained in such exchanges? Individuals seem to make assumptions about these things to a large extent based on what they know themselves (about themselves?), and if you have limitations in these areas it may be difficult to figure out that this is the case. Another apt analogy might be children who need glasses early on in their lives – we screen for vision impairment in young children in part because young children don’t know, and may never on their own realize, that the world is not supposed to be blurry, and that you’re actually supposed to be able to see all the letters written on the blackboard.
I thought I should make one thing clear before moving on to the main text, a point particularly relevant considering the comic which I decided to start out with; which is that incompetence should not be equated with/interpreted as malicious intent. It seems to me that many people conceive of people with autism-spectrum disorders as inconsiderate jerks who don’t have a clue – I’ve seen quite smart people state relatively similar things in the past. I dislike the ‘jerk’-model because I try to be thoughtful and considerate when interacting with others, and when these people think that way I feel that they’re devaluing the work I put into this stuff. One important problem which is sort of hard to figure out how to deal with is that I’m well aware that the more thoughtful and considerate I am (…or is it: ‘try to be?’) during social encounters, the more taxing the social interactions may become, and taxing social interactions lead to social isolation and withdrawal. Coming up with a good equilibrium level of effort is not an easy task, and I think one needs to address aspects like these before making strong judgments about things like the jerkishness of specific behaviours. In a way people with social anxiety have similar concerns which other people also cannot observe (in this case it would be excessive amounts of thinking during social situations about whether they are doing stuff right now that may mean that they’ll get rejected by others, which then leads to oversensitivity to clues of rejection, leading to social avoidance because of perceived rejection). Of course people with autism-spectrum disorders may be anxious as well, as also mentioned in the coverage below. 
The level of self-awareness varies a lot in people with autism-spectrum disorders, but people with relatively high levels of self-awareness may certainly face some constraints and tradeoffs which are not immediately obvious to the outsider and which may actually be assumed by neurotypicals to be absent, given the diagnosis.
The textbook answered one question I’d been thinking about a few times without ever worrying enough about it to actually seek out an answer, which is the question of what the recent diagnostic changes might mean, given that I have a diagnosis which by now has been ‘retired’. It turns out that I was diagnosed with what the textbook considers the ‘gold-standard tools’, which means that this remark about the recent diagnostic changes seems to answer the question: “The DSM-5 noted that ‘Individuals with a well-established DSM-IV diagnosis of […] Asperger’s disorder […] should be given the diagnosis of autism spectrum disorder’”. I’m not going to ‘ask’ for a ‘new’ diagnosis (/a ‘translation’ of my diagnosis) (and quite aside from what other people like to call this stuff, I like the word ‘eccentric’ a lot better than the word ‘autistic’…), but it’s nice to know which recommendations are being made in this area. Some of the quotes below also relate a bit to these aspects.
I’ve added some quotes from the book below.
“Autism is a developmental neurobiological disorder characterized by severe and pervasive impairments in reciprocal social interaction skills and communication skills (verbal and nonverbal), and by restricted, repetitive, and stereotyped behavior, interests, and activities. […] Autism and autistic stem from the Greek word autos, meaning “self.” The term autism originally referred to a basic disturbance, an extreme withdrawal of oneself from social life, or aloneness. […] The critical point in the scientific history of autism was in 1943, when Leo Kanner published Autistic Disturbances of Affective Conduct, a groundbreaking paper that described the symptoms of 11 children presenting similar behaviors that had not been previously recognized. […] Based on Kanner’s terminology, autism was considered for years a psychosis, and child psychiatrists were using “childhood schizophrenia” and “child psychosis” in autism as “interchangeable diagnoses.” […] A parallel line of inquiry to that of Kanner and Eisenberg is represented by the work of Hans Asperger.”
“In Autism and Pervasive Developmental Disorders, Fred Volkmar and Catherine Lord (2004) distinguished important points of differentiation and similarities between Kanner’s and Asperger’s descriptions. […] In concluding their comparison of Kanner’s and Asperger’s descriptions, Volkmar and Lord pondered whether, despite the relevant differences, it was “scientifically and clinically helpful to classify individuals with these traits into separate categories of autism or Asperger’s disorder, or whether it would be better to treat them as parts of a greater continuum.” The utility of the “greater continuum” has led to the category of autism spectrum disorder to be proposed for DSM-5. […] As a result of [various findings] and the lack of reliability in the community in making distinctions among the ASDs [Autism-Spectrum Disorders] [for example: “Variations in clinical severity among ASD cases are not valid indices of differences in pathophysiology or etiology”], the Fifth Edition of the Diagnostic and Statistical Manual (DSM-5) proposes to collapse all of these clinical syndromes into a single diagnosis of “autism spectrum disorder.” Although this revision is appropriate for community diagnosis, and thus the allocation of clinical and support services, research studies will continue to rely on research diagnostic instruments like the Autism Diagnostic Interview (ADI) and the Autism Diagnostic Observation Schedule (ADOS) [these were both part of my work-up, US] to make categorical distinctions between “autism and not autism” and “autism and autism spectrum disorder” (which includes Asperger’s disorder and PDDNOS [Pervasive Developmental Disorder Not Otherwise Specified]). These distinctions have played a vital role in advancing our understanding of the behavioral and neural profile of ASD over the past two decades”
“Recent studies and reports from the Centers for Disease Control […] have shown an increase in the prevalence of children diagnosed with an ASD to one in 110 […] The reported increase is thought to be attributable to several factors. First, there have been changes in diagnostic practices […] Second, there is greater public awareness of ASD and more case-finding […] Finally, there has been a tendency to diagnose many children with intellectual disability as PDD. […] no evidence currently exists to support any association between ASD and a specific environmental exposure. […] Numerous studies have failed to demonstrate a causal relationship between immunizations, particularly thimerosal-containing vaccines, and ASD […] The CDC (2009) reports the median age for a diagnosis of ASD to be between 4.5 and 5.5 years. […] the ASD diagnosis is four times more common in boys than in girls.”
“The essential features of Asperger’s disorder are severe and sustained impairment in social interaction (criterion A); and the development of restricted, repetitive patterns of behavior, interests, and activities (criterion B); which must cause clinically significant impairment in functioning (criterion C). There are no clinically significant delays in language (criterion D) or cognitive development (criterion E).”
“ASD (excluding Asperger’s disorder) has early language and communication impairment. […] almost two thirds of individuals with ASD also have ID [intellectual disability] […] 15%–20% of cases of ASD are now linked to genetic or chromosomal abnormalities […] Fragile X Syndrome (FXS) [is] the most common identifiable cause of ASD and the most common inheritable cause of ID. […] Thirty percent of individuals with FXS demonstrate characteristics of ASD.” [In some other conditions penetrance is even higher – examples that could be mentioned are 15q duplication and Timothy Syndrome, but prevalence is lower in these cases, and especially in the latter case some might argue that the autism is the least of that child’s problems..]
“Challenging behaviors [in individuals with ASD] may reflect pain that is not communicated verbally […] Challenging behaviors may [also] reflect the child’s difficulty with communication, changes, new places, new situations, new experiences, new sounds, new smells, and new people” [I wonder if you can spot a pattern here in terms of what these children (/people) don’t like? I think an important distinction here is to be made between curiosity and the desire to try out new things. I’m often, hesitant, about trying out new things, yet I’m also quite curious about a lot of things. Be careful which categories you apply here and how they may impact your thinking… In a related vein:] “Insistence on sameness and difficulty with change are common symptoms of an ASD. These behaviors should not typically be considered a behavior done to exert control over others.”
“Psychiatric comorbidity is now acknowledged as quite common in ASD [and] psychiatric comorbidity increases the level of impairment […] There is a handful of questionnaires [aiming at spotting psychiatric comorbidities] that have been developed specifically for use in developmentally disordered or ASD populations. […] none of the measures has the level of research support possessed by questionnaires used in other branches of psychiatry. The vast majority of these instruments have just one study behind their development, or have been studied only by the developer of the instrument. […] one of the main challenges in diagnosing psychiatric disorders in individuals with ASD is the possibility of different presenting symptoms and difficulty in differentiating impairment related to the underlying ASD from impairment due to a separate condition. […] While we do not want to miss true comorbid diagnoses, over-diagnosing comorbidity can be equally harmful. […] Mood disorders, such as depression and bipolar disorder, in ASD have recently begun to receive a great deal of attention […] there are many potential psychosocial stressors that could be possible triggers. For example, higher-functioning individuals who are aware of their deficits and badly desire friends, but lack success in this area, are at particular risk. […] Although there is little research on emotion regulation in ASD, there is clear evidence that emotion regulation is highly variable and often problematic in this population, regardless of psychiatric comorbidity […] Therefore, particularly for mood disorders, it is imperative to consider baseline functioning and not over-diagnose mood disorders when the concern may be more temperamental in nature.”
“Anxiety is considered by some to be the most common comorbid psychiatric concern in ASD […]. The DSM-IV-TR notes that individuals with ASD might have unusual fear reactions, and it is also not uncommon for there to be a general tendency toward anxiety for many individuals with ASD. […] There are many aspects of having an ASD that may lead to this increased risk for anxiety, to the degree that some consider anxiety and the social impairment in ASD to have a bidirectional relationship […] An increase in self-awareness is considered a risk factor for higher anxiety; therefore, anxiety is typically thought of as more common among individuals with ASD who have higher intellectual abilities, and older children, adolescents, and adults.”
“autism may be conceptualized as a disorder of complex information processing resulting from disordered development of the connectivity of cortical systems (e.g., failure of cortical systems specialization) […] approximately 15%–20% of infants with an older sibling diagnosed with autism will ultimately be diagnosable with ASD by three to four years of age. […] [Findings from longitudinal sibling studies] do not support the view that autism is primarily a social-communicative disorder and instead suggest that autism disrupts multiple aspects of development rather simultaneously. […] When both elementary and higher-order abilities in many domains are assessed, it becomes evident that deficits exist in several domains not considered to be integral parts of the autism syndrome, including aspects of the sensory-perceptual, motor, and memory domains. Furthermore, there are enhanced skills and impaired abilities within the same domains as deficits (e.g., memory, language, abstraction). […] Causal explanations for ASD must account for the comprehensive pattern of both deficits and intact aspects of the disorder both within and across multiple domains. […] There is no single primary deficit or triad of deficits, brain regions, or neural systems causing autism. […] Rather, autism broadly affects many abilities at the same time and systematically from its earliest presentation and throughout life. […] This pattern [can] be characterized overall as reflecting a disorder of complex or integrative information processing, which results from altered development of cerebral cortical connectivity in ASD. […] Just as the infant sibling studies have clearly demonstrated, studies of children and adults with autism have also demonstrated a broad but selective profile of deficits and intact or enhanced abilities that all reflect a relationship to information-processing demands. 
[…] it is likely that genes affecting signaling pathways that regulate neuronal organization are strongly implicated in the etiology of autism.”
“ASD is now conceptualized as a developmental neurobiological disorder affecting elaboration of the forebrain circuitry that underlies the abilities most unique to human beings. […] Wiring the brain requires that neurons proliferate, acquire the correct identities, migrate to the appropriate locations, extend axons, and make guidance decisions with a high degree of spatial and temporal fidelity. Converging evidence indicates that more than one of these processes may be altered in various combinations to produce the heterogeneous phenotypes observed in ASD. […] Studies examining head circumference (HC) and brain volume (BV) in individuals with ASD have demonstrated altered brain growth trajectories across the lifespan. […]
• Up to 70% of infants with ASD exhibit abnormally accelerated brain growth in the first year of life. Approximately 20% to 25% of infants in this subset actually meet formal criteria for macrocephaly (i.e., HC of 2.0 standard deviations above the mean) in the first year.
• BV is significantly larger by two to four years of life, and some children meet criteria for megalencephaly (i.e., BV 2.5 S.D. above mean).
• The first two years of life are usually a period of rapid brain growth in infants as neurons undergo significant postnatal growth in cell size and elaboration (actually overproduction) of axons, synapses, and dendrites. It is possible that this process is exaggerated somehow in at least a subset of ASD.
• Whatever the neurobiological basis, abnormal growth rates in ASD tend to decline significantly after the initial acceleration, causing an apparent “normalization” of BV by adolescence or early adulthood. […]
At the time of maximal brain growth in very early childhood, cerebral gray matter (GM) and white matter (WM) are both increased […] The frontal cortical GM and WM show the most enlargement, followed by the temporal lobe GM and WM and the parietal GM.”
“Thus far, [fMRI and fcMRI] studies have identified underconnectivity with the frontal cortex as a specific characteristic of the altered connectivity in autism, and this characteristic is present across the same wide range of domains of complex information processing that are affected in the disorder, including social, language, executive, and motor processes. […] measures of functional connectivity between specific areas have been shown to reliably predict the degree of impairment in specific domains among those diagnosed with autism. For instance, individuals with poorer social functioning measured by the ADI-R show lower functional connectivity between frontal and parietal cortices. These findings gave rise to the underconnectivity theory in autism, which now has sufficient support that it is accepted as a central feature of the pathophysiology of autism […] Results from these studies are consistent with the notion that autism is a disorder of distributed neural systems (e.g., the connections between structures rather than the structures themselves). […] Diffusion-weighted imaging measures the direction and speed of microscopic water movement in the brain, allowing inferences about the microstructure of the tissue that constrains such movement. These studies have consistently found reduced structural integrity of white matter in adults with ASD, indicating reduced anatomical connectivity […] like measures of functional connectivity, measures of anatomical connectivity derived from diffusion imaging have been shown to reliably predict symptom severity among individuals with autism.”
“In thinking about the genetic basis of autism, it is important to contrast syndromic (or complex) and non-syndromic (or idiopathic/essential) ASD. […] Syndromic ASD includes identifiable autism syndromes with known genetic causes, such as tuberous sclerosis complex, Fragile- X syndrome, Rett syndrome, and Smith-Magenis syndrome.
• Syndromic ASD is associated with a relatively higher propensity for dysmorphic features (including anatomical brain abnormalities), intellectual disability (ID), seizures, and female sex (sex ratios are almost equal).
• Syndromic ASD is also associated with a higher frequency of chromosomal abnormalities in general, many of which have been identified […]. However, it is not yet clear for many of these syndromes which features are typical of autism and which are unique.
Non-syndromic ASD is also called idiopathic autism and consists of cases with and without identifiable micro-deletions or duplications to the DNA. […] individuals with idiopathic ASD are more likely to be male, with sex ratios approximately 1:4 (F:M) but approaching 1:7 in milder cases.”
“Overall, approximately 10 % of children being evaluated for ASD are found to have an identified medical condition with a known genetic lesion such as Fragile X or tuberous sclerosis. An additional 10 % or more have an identifiable chromosomal structural abnormality or copy number variation associated with ASD. […] Recent genome-wide scans using microarray technology have demonstrated a substantial role for small chromosomal deletions or duplications (i.e., copy number variation or CNV) in the etiology of ASD. […] There is [however still] considerable debate concerning the genetic architecture underlying […] the majority of idiopathic autism. Arguments can be made for either the effects of single, but rare Mendelian causes (for which documented CNVs are presumably the tip of the iceberg) or the interaction of numerous common, but low-risk alleles. Genetic linkage and association studies have been traditionally employed to address the latter model, but have failed to consistently identify susceptibility loci.” [An important point I should perhaps make before finishing this post is that if incidence/prevalence of a condition is increasing fast in a population, which seems to be the case here, such an increase is in general considered to be unlikely to only be the result of genetic changes at the population level – that type of pattern is usually indicative of environmental factors playing an important role. It may well be that the ‘average cause’ is different from the ‘marginal cause’, and that it may be a good idea to be careful in terms of which tools to use to explain base rates and growth rates. 
It might be argued that increased assortative mating among nerds in Silicon Valley has increased incidence locally (I’m sure this might be argued as I’m quite sure I’ve seen this exact argument before…) and I’m not saying this may not be the case, but if close to one percent of the American population get diagnosed, what goes on in Silicon Valley probably isn’t super relevant one way or the other – only roughly 1% of the population live in that area altogether. Even if you were to argue that a similar process is going on everywhere else in the country, it sort of strains belief that ‘something else’ is not going on as well].
“The system of normative ethics which I am here concerned to defend is […] act-utilitarianism. […] Roughly speaking, act-utilitarianism is the view that the rightness or wrongness of an action depends only on the total goodness or badness of its consequences, i.e. on the effect of the action on the welfare of all human beings (or perhaps all sentient beings).”
The book is simple: The first half tells you why (act-)utilitarianism is great, and the second half tells you why utilitarianism sucks.
I’ve been unsure how to blog this book, and as I’m writing this I have yet to decide on the best approach. It probably makes sense to start out with some general remarks. The first general remark is that I liked Smart’s half (the first half) better than Bernard Williams’ half, in significant part because it is in my opinion much easier to read and understand than especially the first half of the second half of the book – regardless of the merits of the arguments, I simply think J.J.C. Smart is a much better writer than Bernard Williams. There are some important points hidden away in Williams’ account, but in my opinion he waffles so much you sometimes don’t really care one way or the other. Trained philosophers may disagree, but I’m not used to reading philosophical texts, and stuff like that is part of the reason.
The second general remark is that this book reminded me why I don’t really care about moral philosophy in the first place. Moral judgments don’t really interest me very much. Coming up with elaborate systems (or, in some cases, not-so-elaborate systems) of thought which allow some action patterns and disallow others, evaluated by considering how these systems perform in hypothetical scenarios which may or may not ever happen to anyone you know (“the common methodology of testing general ethical principles by seeing how they square with our feelings in particular instances”, as Smart puts it in the book…), or perhaps by figuring out whether the systems are self-consistent or not, simply seems to me a strange way to identify good decision(/justification) rules.
I have come to realize that my opinion of the coverage – but perhaps especially of Smart’s account – is influenced by some thoughts I had a while back and discussed with a friend last week. I was at the time considering blogging some of those thoughts, but I decided against it. Anyway, these thoughts relate to how knowledge may shape how you think about stuff; this specific topic is actually covered in the book, though from a very different angle. I hold to the view that thinking which is more or less unconstrained by knowledge will most often be very inferior to the kind of (‘directed…’ was the word my friend used – a good word in this context, I think) thinking which is constrained by data. What I came to realize along the way was that what I was really missing in this book was some actual knowledge about how humans behave, some understanding of why people behave the way they do, and how such aspects intersect both with which types of behaviours may in theory be ‘permissible’ or not, and with why people think the way they do about the thoughts they have and the actions they engage in. We know some stuff about those kinds of things; books have been written about them – for a neat little book on related topics, see Tavris & Aronson’s account. Smart mentions in his part of the book that: “If […] act-utilitarianism were put forward as a descriptive systematization of how ordinary men, or even we ourselves in our unreflective and uncritical moments, actually think about ethics, then it is of course easy to refute […] [But] it is precisely because a doctrine is false as description and as explanation that it becomes important as a possible recommendation.”
‘People don’t seem to make moral judgments the way I’d like them to, but if they did the world would be a better place’ may or may not be true, but when your argument is founded on logic and you don’t really have good data suggesting that this approach to making moral judgments actually leads to better ‘moral outcomes’ (whatever that may mean – but then again the proponent of such a view is free to define his terms and then argue why his system is better, as that is what people do in other areas, so this caveat may not be important), then I don’t really think you have a very strong case. People (well, some people – it’s probably mostly other economists…) occasionally criticize economists harshly when they fail to take general equilibrium effects into account when making policy recommendations based on partial-equilibrium analyses (‘the employment effects of a job programme involving 500 people may be very different from the employment effects of the same type of programme scaled up to involve 50,000 people’); what these guys are doing is in some sense even worse, as they’re really arguing without any data at all – “I think this”, “I think that”.
I’m sure this kind of stuff relates to things like how you approach the topic of meta-ethics and where people stand on things like the non-cognitivist approach Smart talks about in his introduction, but I’m not well-versed in such matters. What I will say is that given what I know about many other topics (primatology, (/behavioural) economics, medicine, psychology, evolutionary biology, anthropology, …), I think the sort of approach these guys take to all of this stuff is not very ‘useful’; in my opinion you need to know and understand a lot about why people behave the way they do in order to even be in a position where you are justified in having any sort of opinion about how to evaluate the things people do or think in the first place. And these guys have not convinced me they know a lot about things aside from the sort of things philosophers know. I’ll go into more detail about these aspects below, but before doing that I would point out that another way to approach moral questions than the one they apply would be to identify/define specific outcomes, behaviours or motivations of interest, analyze variation in data on these variables, and figure out if there are some useful patterns to be found. Perhaps people who commit murder have things in common, and perhaps some of the variables they have in common can be addressed/modified by policies and/or behavioural change at the individual level. I’m not a philosopher; this is more along the lines of ‘where I’m coming from’.
In terms of ‘the stuff I know’ I alluded to above, a few examples are probably in order to get at some of the issues:
i. “Research on parent-child conflict during the first decade of life most often has focused on emotional outbursts, such as temper tantrums […] and coercive behavior of children toward other family members as evidence of conflict. The frequency of such behavior begins to decline during early childhood and continues to do so during middle childhood […] The frequency of episodes during which parents discipline their children also decreases between the ages of three and nine […] research on conflict management in this period has focused on the relative effectiveness of various parental strategies for gaining compliance and managing negative behaviors.” (link)
ii. “The result of an interview is usually a decision. Ideally this process involves collecting, evaluating and integrating specific salient information into a logical algorithm that has shown to be predictive. However, there is an academic literature on impression formation that has examined experimentally how precisely people select particular pieces of information. Studies looking at the process in selection interviews have shown all too often how interviewers may make their minds up before the interview even occurs (based on the application form or CV of the candidate), or that they make up their minds too quickly based on first impression (superficial data) or their own personal implicit theories of personality. Equally, they overweigh or overemphasise negative information or bias information not in line with the algorithm they use.” (link)
iii. “many doors in life are opened or closed to you as a function of how your personality is perceived. Someone who thinks you are cold will not date you, someone who thinks you are uncooperative will not hire you, and someone who thinks you are dishonest will not lend you money. This will be the case regardless of how warm, cooperative, or honest you might really be. […] a long tradition of research on expectancy effects shows that to a small but important degree, people have a way of living up, or down, to the impressions others have of them. […] judges use stereotypes as an important basis for their judgment only when they have little information about the target. […] When you know someone well you can base your judgments on what you have seen. When you have little information, you fall back on stereotypes and self-knowledge.” (link)
iv. “The need for closure (NFC) has been defined as a desire for a definite answer to a question, as opposed to uncertainty, confusion, or ambiguity […] People exhibit stable personal differences in the degree to which they value closure. Some people may form definitive, and perhaps extreme, opinions regardless of the situation, whereas others may resist making decisions even in the safest environments. […] Taken together, the research on intrapersonal processes demonstrates that people who are high in NFC seek less information, generate fewer hypotheses, and rely on early, initial information when making judgments. […] The manner in which people interpret their own and other people’s behaviors and outcomes is linked predictably with their self-esteem and self-concepts. […] a large body of research on attribution processes shows that people high in self-esteem take credit for their successes and blame their failures on external factors […] In contrast, people low in self-esteem are less inclined to take credit for their successes and more inclined to assume responsibility for their failures” (link)
v. “All addictive drugs are subjectively rewarding, reinforcing and pleasurable. Laboratory animals volitionally self-administer them, just as humans do. Furthermore, the rank order of appetitiveness in animals parallels the rank order of appetitiveness in humans […] it is relatively easy to selectively breed laboratory animals for the behavioral phenotype of drug-seeking behavior (the behavioral phenotype breeds true after about 15 generations in laboratory rodents)” (link)
vi. “Psychological autopsy studies in the West have consistently demonstrated strong associations between suicide and mental disorder, reporting that 90% of people who die by suicide have one or more diagnosable mental illness” (link)
vii. “Evolutionary explanations are recursive. Individual behavior results from an interaction of inherited attributes and environmental contingencies. In most species, genes are the main inherited attributes, but inherited cultural information is also important for humans. Individuals with different inherited attributes may develop different behaviors in the same environment. Every generation, evolutionary processes — natural selection is the prototype — impose environmental effects on individuals as they live their lives. Cumulated over the whole population, these effects change the pool of inherited information, so that the inherited attributes of individuals in the next generation differ, usually subtly, from the attributes in the previous generation. […] Culture is a system of inheritance. We acquire behavior by imitating other individuals much as we get our genes from our parents. A fancy capacity for high-fidelity imitation is one of the most important derived characters distinguishing us from our primate relatives […] We are also an unusually docile animal (Simon 1990) and unusually sensitive to expressions of approval and disapproval by parents and others (Baum 1994). Thus parents, teachers, and peers can rapidly, easily, and accurately shape our behavior compared to training other animals using more expensive material rewards and punishments.” (link)
viii. “When two people produce entirely different memories of the same event, observers usually assume that one of them is lying. […] But most of us, most of the time, are neither telling the whole truth nor intentionally deceiving. We aren’t lying; we are self-justifying. All of us, as we tell our stories, add details and omit inconvenient facts […] History is written by the victors, and when we write our own histories, we do so just as the conquerors of nations do: to justify our actions and make us look and feel good about ourselves and what we did or what we failed to do. If mistakes were made, memory helps us remember that they were made by someone else. If we were there, we were just innocent bystanders. […] We remember the central events of our life stories. But when we do misremember, our mistakes aren’t random. The everyday, dissonance-reducing distortions of memory help us make sense of the world and our place in it, protecting our decisions and beliefs. The distortion is even more powerful when it is motivated by the need to keep our self-concept consistent; by the wish to be right; by the need to preserve self-esteem; by the need to excuse failures or bad decisions; or by the need to find an explanation, preferably one safely in the past” (link)
ix. “The basic idea behind self-signaling is that despite what we tend to think, we don’t have a very clear notion of who we are. We generally believe that we have a privileged view of our own preferences and character, but in reality we don’t know ourselves that well (and definitely not as well as we think we do). Instead, we observe ourselves in the same way we observe and judge the actions of other people—inferring who we are and what we like from our actions. […] We may not always know exactly why we do what we do, choose what we choose, or feel what we feel. But the obscurity of our real motivations doesn’t stop us from creating perfectly logical-sounding reasons for our actions, decisions, and feelings.” (link)
One key point is that people are different, in all sorts of ways. They’re systematically different in terms of behavioural dispositions, and some behaviours may to a great extent simply be the result of biological factors (drug abuse is certainly relevant here, and suicide probably is as well; these are relevant to the discussion not just because there are relevant differences in behavioural dispositions, but also because people tend to think they ought to have views about the ethics of these behaviours). If individuals are different and such differences are important in terms of which actions the individuals are likely to engage in, it might be natural to suggest that taking such differences into account may be an important component in the evaluation of the ethical properties of a given behaviour. That was actually not the point I was going for, as I’m not sure I really care a great deal about what moral systems should look like. However it does seem to me that people are taking many individual-level differences into account, to varying degrees, when making moral judgments, whether or not they ‘should’.
The basic point is that people are different, and so they have different moral systems. This is not a new idea of mine, and I’ve previously touched upon factors of relevance in this analysis; see for example this post (key point: “If you’re better able to handle complexity you’re able to make use of more complex moral algorithms.”). Another way to think about it, which also relates to the quotes above, would be to say that as people use their moral systems repeatedly to justify their own behaviours, and as people behave in different ways, it’s really beyond doubt that people have different moral systems which incorporate different stuff. When looked at from that point of view, utilitarianism is really just one system (or family of systems), which appeals to some specific people for specific reasons related to why those people are the way they are and behave the way they do. This is my observation, not an observation made in the book, but Williams does touch very briefly upon related aspects, in the sense that he talks about “the spirit of utilitarianism, and […] its demand for a rational, decideable, empirically based, and unmysterious set of values”, and at the end of his contribution charges the system with “simplemindedness”.
The social dimension alluded to in the quotes above seems relevant as well. Individuals are different from each other, but so are different groups of individuals (see e.g. vii). Groups are particularly important because things like social feedback systems are really important determinants of individual behaviours, and of how individuals approach various questions and actions. For example, people may act differently when they’re in a group than when they’re on their own – ethicists may or may not agree that such differences are relevant to the ethical judgment of behaviour, but there’s a potential variable lurking here which some people may consider important. Another related example might be that some people may seek out social environments containing people who are likely to approve of their behaviours and avoid social environments containing people who do not – they may, in short, behave in a manner which makes enforcement of ethical systems more difficult. Some people may also respond differently to social feedback than others do. If some people do consider such variables important when making moral judgments, and you’re planning to discuss ethics with such people, then you probably need some knowledge about how groups of people work, and how social aspects impact behaviour (i.e. you need to know some stuff about social psychology, sociology and related fields).
An implicit argument here is that if you have a moral system which makes judgments without regard to the knowledge we actually have of how people behave and why they behave the way they do, you’re likely to end up ‘left behind’ in the long run. You end up with something like religious rules: a system of behavioural rules which perhaps sort of made sense, kind of, during a period when people didn’t really know anything about anything, but which makes a lot less sense now because we know better. It’s not hard to argue, though I’m sure some moral philosophers might disagree with me, that it is better to medicate the schizophrenic than to deem him mad and incarcerate him. I make this point explicit because, at least judging from this book, the philosophical approach to how to handle ethical systems and evaluate their attributes seems to me to have many things in common with the religious approach, and much less in common with a behavioural-sciences approach. Thought experiments asking how you would/should behave if you happened to find yourself in front of a guy who’s threatening to shoot 20 other people unless you shoot one of them yourself may be useful in terms of illustrating key aspects of an ethical system, but is this kind of analysis really likely to lead you very far? Some of this stuff seems to me not that different from theology. ‘People who act friendly and non-threatening in social situations are more likely to find friends and keep them’ (or whatever) seems to me much more useful information, in terms of how to answer questions such as ‘what is a good (‘ethical’) way to live your life’, than are thought experiments like these and discussions about key assumptions related to them. It seems to me that a lot of what these people are doing is adding new floors to the ivory tower and not much else.
In terms of the ‘risk of being left behind’ comment above, I should note that I’m aware this is perhaps a problematic way to think about things. Some people (especially religious people, presumably) would certainly argue that it makes a lot of sense to adopt a sort of Darwinian approach to meta-ethics and consider the moral systems likely to persist and ‘survive’ to be ‘better’ than the alternatives; in which case religious systems have a lot going for them, in part because they’re very good at constraining thinking and suppressing certain lines of thought likely to weaken the systems (like the thought that all this stuff is just made up). Williams talks about related stuff in his coverage – his view is incidentally that such implicit constraints on moral thinking are a good thing, and he considers the absence of such constraints to be a problem with utilitarianism – and I decided to include a few relevant quotes on that matter below:
“It could be a feature of a man’s moral outlook that he regarded certain courses of action […or thought, US] as unthinkable, in the sense that he would not entertain the idea of doing them […] Entertaining certain alternatives, regarding them indeed as alternatives, is itself something which he regards as dishonourable or morally absurd. But, further, he might equally find it unacceptable to consider what to do in certain conceivable situations. […] Consequentialist rationality, however, and in particular utilitarian rationality, has no such limitations: making the best of a bad job is one of its maxims”
Something I found interesting in that part is that Williams does not make clear that constraints on moral thinking have the potential to lead to both good and bad ‘outcomes’ (‘lead to “better” or “worse” performing moral systems’ would be a statement inclusive enough to also incorporate non-consequentialist ethical systems, it seems to me, but then a different problem related to what we mean by ‘better’ or ‘worse’ of course pops up. Anyway, if you have difficulty conceptualizing this idea it probably makes sense to just model it this way: constraints on moral thinking may stop you from thinking that it might be a good idea to kill all the Jews (the argument being that where people are free to think this thought, the associated outcome becomes more likely), but such constraints may also stop you from thinking that killing Jews is wrong, if you happen to live in a society where killing Jews is the morally enforced norm), even though a related symmetry argument seems to be used by both proponents and opponents of utilitarianism in the context of events taking place in the far future. Note incidentally, on a related if different note, that when people make moral judgments about a given action, the amount of time that has passed since the event in question may have a significant influence on the judgment (see viii).
I do not think people use utilitarian systems of thought to decide upon which actions to engage in, and as mentioned previously neither does Smart; he’s careful to point out in his coverage that what he’s defending is a normative system, not a descriptive system. In my view people often don’t know why they do the things they do, and even when they think they do, they probably don’t, really, because there are an incredible number of aspects which are relevant, and people probably often don’t know about half of them. “But the obscurity of our real motivations doesn’t stop us from creating perfectly logical-sounding reasons for our actions, decisions, and feelings”, as pointed out in ix. We’re not rational creatures, but we are rationalizing creatures. People may use a utilitarian framework to present the decision context and the decision process, but it’s just a model. I probably differ from Smart also in the sense that Smart may be a lot more optimistic about the feasibility of even applying such a scheme than I am. Smart would probably think about a hypothetical situation in this way: ‘I have thought about this potential action X, and it seems to me that the consequences of this action X would be that one person is made much better off and another person is made slightly worse off. If I do nothing instead, no-one is made either better off or worse off. I wish to maximize average happiness, and so this action seems justified. Thus I shall now proceed to do X.’ I would be more likely to think along these lines: ‘Smart’s primate brain had decided after 2/10ths of a second that Smart wanted to do X. Smart’s primate brain is good at making Smart think he’s in charge, so now Smart’s brain will engage in a bit of work which will yield him the answer ‘he’ already decided upon.’
The utilitarian model is just a model, and/but it’s the type of model which appeals to some types of people more than others. When you look at it like this, it sort of changes how you view the question of whether ‘people should use utilitarian systems of thought more’ even makes sense. A book like this will probably in some ways tell you more about the personalities of the authors than about the desirability of the more widespread ‘implementation’ (whatever that may mean) of a specific ethical system of thought. There’s no data here, just arguments, so neither of the authors really has a clue, would be my contention, and they probably would not be able to agree on how to even evaluate competing systems if they did. It is not perfectly true that they ‘have no clue’ – e.g. the information problems pointed out towards the end of Williams’ account, where he talks a bit about collective decision making rather than individual-level decision making in a utilitarian framework (the point being that you need a lot of data, which is not available, in order to engage in utilitarian analyses and semi-sensible utilitarian-inspired decision making at e.g. the population level), certainly do have at least some real-world relevance – but I think it’s close enough. One aspect that really irritated me about this coverage is that although there are some potentially valuable distinctions made along the way (people may employ the correct decision rule yet end up with a bad outcome anyway, and such things may be important when making moral judgments (…or judgments about how best to set up compensation schemes in organizations, I’ll add…); when deciding whether or not to praise an action, a potentially relevant distinction is to be made between the desirability of the action and the desirability of praising the action), they don’t really get very far. If I ever find myself facing a Mexican who’s about to kill 20 people, I’ll know what to do, but…
Some people might have read some of the stuff above and thought to themselves that if you’re a hardcore consequentialist/utilitarian who does not care about anything but the consequences of actions and the utility derived from them, then you probably don’t care about whether or not the individual made the decision because he was sleep-deprived or had high levels of testosterone in his blood due to an untreated medical condition. That’s the whole point – you disregard irrelevant factors like intentions and similar stuff, right? I have sort of assumed this would not be the utilitarian’s reply, because in that case the system seems to me to devolve into a caricature very fast (on account of the ‘and similar stuff’ part, not the intentions part), where you lock away the schizophrenic. I think there’s a big difference between including in the analysis people’s explicit justifications for their actions (leading to a ‘you meant well’ judgment) and other, implicit, factors which might also have influenced behaviour (‘the cancer patient was tired and in pain, and that was why she yelled at her neighbour when his dog ran into her garden’). There’s a difference between explaining and explaining away, but they sort of go hand in hand. Note also that this objection does not relate only to individual-level decision-making; objections with a similar structure can be made in the context of population-level decision making, where the behaviours of groups of people may also have explanations/reasons which are relevant to the ethical judgment yet unrelated to the explicit justifications people forward for behaving the way they do.
I’m not sure how I feel about the validity of some of the specific arguments to be made in the latter case and how relevant they are/ought to be to the moral judgments to be made, but I did want to mention this aspect to preclude people from perhaps assuming erroneously that even if there are problems at the level of the individual, such problems go away when you start looking at groups of people instead. I don’t think this is true at all, though of course details are different in different social contexts.
I know that I have not really talked a great deal about the actual contents of the book in this post, and if you’re really curious to learn more about what’s in there you’re welcome to ask and maybe I can be persuaded to provide some more details. I was planning to perhaps include a few quotes from the book in a future quotes post, but aside from that I’m not really considering spending any more time on the book here on the blog.
Feedback on the thoughts and ideas presented here is very welcome.
The stuff below covers material from the last half of part V in Holmes et al. This previous post also dealt with this topic (the title of this post is the title of part 5 of the book, which if not hidden away in a 2000+ page textbook might well have been a book of its own; I’m quite sure you can find entire books on these topics which go into much less detail than does part 5 of this book). Some of the stuff was really hard to read; I’ve tried to include in this post mainly stuff people who have not read the rest of the coverage might be expected to understand without too much difficulty.
“Antibody to EBV [Epstein-Barr Virus] can be detected in 90-95% of the population by adulthood.10 Primary exposure often occurs in the first years of life, with seroconversion evident before the age of 5 years in 50% of children studied in the United States and Great Britain.22,23 In economically advantaged communities, primary infection may be delayed until adolescence or early adulthood,24 at which time acquisition of virus produces the clinical syndrome acute infectious mononucleosis. […] Primary EBV infection in infancy or early childhood is usually subclinical, but when delayed until the second decade of life, it manifests as infectious mononucleosis in up to 50% of patients. A self-limiting lymphoproliferative disease, the syndrome consists of fever, headache, pharyngitis, lymphadenopathy, and general malaise. Resolution of symptoms may take weeks to months, but primary infection is always followed by the establishment of a permanent viral carrier state.”
“a highly significant correlation between seropositivity for EBV, sexual intercourse, and an increasing number of sexual partners was found in a cross-sectional analysis of 1006 new University students.50 In this study, two-thirds of infectious mononucleosis cases were statistically attributable to sexual intercourse, whereas only a tenth of asymptomatic primary infections were linked to sexual activity.” [This is of course just a cross-sectional analysis, but even so – ‘kissing disease‘ may perhaps be a slightly inaccurate term…]
“The overall size of the EBV-infected B cell reservoir is largely controlled by CD8+, HLA class I-restricted, EBV-specific cytotoxic T lymphocytes (CTLs). Up to 5% of the total circulating CD8+ T cell pool may be committed to this single virus in the EBV-carrier state,110 indicating the critical role for T-cell surveillance in maintaining the host: virus balance. Impaired CTL responses in immunosuppressed patients such as transplant recipients or HIV-1-infected individuals32,111 leads to an expansion of the infected B cell population and potentially fatal lymphoproliferative disease. The contribution of virus-specific CTLs in immune regulation of EBV-induced lymphoproliferation has been made clear by recent therapeutic interventions involving adoptive transfer of virus-specific T lymphocytes to restore immunity against EBV infection in bone marrow transplant recipients.112, 113, 114”
“Non-Hodgkin’s lymphoma, an AIDS-defining cancer some 60 times more common in AIDS patients than in the general population,122 can be divided morphologically into Burkitt-like and immunoblastic lymphomas […]. Both have a higher association with EBV (30-40% and 75-80%, respectively) in AIDS than in non-AIDS groups.123,124 […] Burkitt’s lymphomas appear early in the course of AIDS, prior to profound immunosuppression, whereas immunoblastic lymphomas typically occur in late-stage AIDS when cellular immunity is compromised. […] Hodgkin’s lymphomas in the setting of AIDS contain EBV in 75-90% of the cases,140,141 reflecting the unusually high frequency in the HIV-infected population of mixed-cellularity and lymphocyte-depletion subtypes142 known to be most closely associated with EBV in the general population.15,16 Although not considered an AIDS-defining illness, Hodgkin’s lymphoma occurs with increased frequency in the setting of HIV infection. […] Our understanding of the precise role of EBV in the biology of each malignancy remains rudimentary.”
“Papillomaviruses are a group of small DNA viruses that primarily induce epithelial cell proliferation, or papillomas, in higher vertebrates. Infections are strictly genus or species specific and there is considerable tropism among the viruses for particular anatomic sites. […] The genomes of over 100 human papillomavirus (HPV) types have been molecularly cloned and completely sequenced and partial sequences for potentially up to another hundred types have been detected by PCR-based assays.9,10”
[You can skip this part without missing out on anything important:] “A basal level of transcription from the p97/p105 promoter is regulated by the keratinocyte-dependent enhancer in the NCR and can be repressed by binding of E2 to its cognate recognition sequences located adjacent to the p97/p105 TATA boxes.82,87 E2 has a role in repressing transcription from the viral promoter and its loss on viral DNA integration allows increased expression of the oncogenic proteins E6 and E7. E2 also activates transcription under some circumstances.88 Variations in binding to four different E2 binding sites may contribute to transcriptional regulation.89 Interestingly, the E8E2C protein of HPV-31 had an ability to repress transcription from a single promoter-distal E2 binding site. This activity was lacking in the full-length E2, suggesting that E8E2C has a role in regulating transcription.90 E2 binds the general transcription factor TFIIB91 and is thought to directly inhibit HPV transcription at a step subsequent to binding of TBP or TFIID to the TATA box.92 It interacts with numerous other transcription factors and chromatin modification factors including CBP,93 p/CAF,94 C/EBP,95 nucleosome assembly protein 1,96 and Top BP1.97 An interaction between bromodomain protein 4 and E2 is required for transcriptional activation by E2.98 Two crystal structure studies of the amino terminal transcription activation domain of E2 suggested that dimerization of E2 helps recruit distal transcription factors to the viral transcription complex99 …” [I decided to include a quote like this as well to indicate what kind of stuff the book is also full of, and illustrate why it was hard to read. There’s a lot of stuff in some of these chapters which I had a really hard time following.]
“Predictions about the role of HPV in neoplasia obtained from experimental studies are consistent with the natural history of cervical cancer. Among women who develop squamous intraepithelial lesions, the time from first detection of viral DNA to lesions is short, ~2 years, though factors like number of partners and infection with other STDs may influence the interval […]. Even mildly dysplastic lesions show an increase in proliferation and polyploidization. These changes result as a direct consequence of expression of E6/E7. The median age of carcinoma in situ (CIS) in the United States is 29,223,224 indicating that approximately a decade has elapsed between the initial infection and severe dysplasia. CIS lesions are characterized by their aneuploid DNA content and thus reflect the genetic instability that accompanies prolonged expression of E6/E7. The preneoplastic lesions can regress spontaneously. One principal explanation for their regression is likely to be an effective immune response, and generalized T-cell deficiency has been associated with increased dysplasia […]. Other explanations may include the fact that some, perhaps most, alterations will be deleterious, resulting in cell death. The median age of invasive cervical cancer in the United States is 49, indicating that additional changes required for invasion and metastasis are acquired slowly over time.”
“All HPV types are linked to the development of low-grade cervical SILs [squamous intraepithelial lesions],16 whereas high-grade cervical SILs are usually positive for oncogenic HPV types.272 In a cohort of young women in the United States, Moscicki et al.155 reported that 15% developed LSIL within 3 years after initial HPV infection, and in the UK, Woodman et al.70 reported that the 3-year cumulative incidence of any cytologic abnormality after initial infection was 33%. In another cohort of young women in the United States, Winer et al.132 reported that approximately 50% developed a low-grade lesion within 3 years after initial HPV infection […]. The latter study also reported that rates of lesion detection tend to increase with increased screening frequency, due to spontaneous regression of most low-grade lesions. Therefore, the shorter interval between follow-up visits (4 vs. 6 months) may explain why that study detected LSIL in a higher proportion of women. Half of newly detected low-grade lesions appear to regress within 6-9 months.70,132,273 It also appears that vaginal SIL is not uncommon among women with incident HPV infections. While less commonly detected than cervical SIL, one study reported that almost 30% of women with incident HPV infections developed vaginal SIL within 3 years (Fig. 28-3B), and the median duration of these lesions was less than 5 months.132
Results from natural history studies suggest that LSILs are transient manifestations of productive HPV infections, whereas HSILs are cervical cancer precursor lesions. This is in contrast to the previously held theory that cervical carcinogenesis always follows a progression from persistent HPV infection to development of low- and then high-grade lesions. Several recent studies have shown that cervical high-grade lesions are a relatively early manifestation of HPV infections in young women. Woodman et al.70 reported that the risk of high-grade lesions was highest in the first 6 months after initial HPV infection. It has also been shown that histologically confirmed high-grade lesions are common after infection with HPV 16 and 18,132,274 with one study reporting that 27% of women with incident HPV 16 or 18 infections developed histologically confirmed CIN grade 2 or 3 within 3 years […] and that the median time to development was 14 months.132”
“Although viral hepatitis is unquestionably an ancient disease, it is only in the past 40 years that an appreciation has emerged of the diversity of infectious agents capable of causing the clinical syndrome of acute hepatitis. […] at least five distinctly different human viruses (classified as hepatitis A through E) are now generally recognized to be causative agents of acute and/or chronic viral hepatitis […].
Hepatitis A virus (HAV), hepatitis B virus (HBV), hepatitis C virus (HCV), hepatitis delta virus (HDV), and hepatitis E virus (HEV) all share a remarkable tropism for the liver despite profound differences in their physical structure, pathobiology, and epidemiology. Each of these viruses is a cause of clinically overt acute hepatitis associated with frank jaundice. The severity of the liver disease, which frequently accompanies acute infection with these viruses, generally distinguishes them from cytomegalovirus and Epstein-Barr virus, which typically cause much milder liver dysfunction during primary infections. Within the United States, almost all cases of acute hepatitis are caused by infection with HAV (51%), HBV (40%), or HCV (9%).3 These acute hepatitis virus infections cannot be distinguished from each other without serologic testing. Acute hepatitis represents a considerable disease burden within the United States, with an estimated 40,000-60,000 clinical cases occurring annually during the past 5 years. Only a fraction of these cases are reported to public health authorities, and a substantial number of additional infections do not come to medical attention because they are asymptomatic. Fulminant hepatic failure and death occur in a very small proportion of patients with acute hepatitis A or B, but these clinical endpoints are rarely associated with acute HCV infection in the United States.4,5”
“The major burden of disease due to hepatitis virus infections stems from the chronic liver damage that occurs in individuals who develop persistent infections. The proportion of persons who become persistently infected is highly dependent on the infecting virus. Persistent infections with HAV are not well documented and may never occur. On the other hand, more than 50% of persons infected with HCV fail to clear the virus and most eventually develop biochemical and histologic evidence of chronic liver disease.6 HBV has an intermediate tendency to establish persistence, with the risk of persistent infection being highly dependent on the age at the time of infection and immunologic competence of the individual.
It is appropriate to focus attention on the hepatitis viruses in a textbook concerned with sexually transmitted diseases (STDs). Although these are systemic infections that are also commonly transmitted by other means, HBV is a sexually transmitted infection, and sexual activity may profoundly influence the risks for acquisition of HAV. To a lesser extent, sexual behavior may also influence the risk of infection with HCV and HDV.”
“Several factors account for the high risk of HBV infection among MSM [Males who have Sex with Males], which was noted in […] early studies. One of the most important factors was the number of sexual partners. The typical homosexually active male frequenting Denver’s steam baths in the late 1970s had eight different male sexual contacts per month.206 These contacts were largely anonymous and could total as many as 1000 over the lifetime of a gay man.”
“The endemicity of HBV infection varies greatly worldwide and is influenced primarily by the predominant age at which infection occurs.121,190 Endemicity of infection is considered high in those parts of the world where at least 8% of the population is HBsAg positive, and 70-90% of the population has serological evidence of previous HBV infection. Almost all infections occur during either the perinatal period or early in childhood, which accounts for the high rates of chronic HBV infection in these populations. Risk of HBV infection continues after the first 5 years of life, but its eventual contribution to the high rate of chronic infection is less significant. Chronic infection with HBV is strongly associated with HCC [liver cancer], and areas with a high endemicity of chronic HBV infection have the highest death rates from this neoplasm. […]
In most developed parts of the world, including the United States and Western Europe, the prevalence of chronic HBV infection is <1%, and the overall infection rate is 5-7%. The highest incidence of acute hepatitis B is among young adults, and high-risk sexual activity and injectable drug use account for most cases of newly acquired hepatitis B.191, 192, 193, 194 In the United States, heterosexual activity accounts for 40% of new hepatitis B cases, while MSM represent 15% of new cases.”
“HCV infection accounts for 15% of acute viral hepatitis cases within the United States. However, it is by far the leading cause of chronic viral hepatitis and is present in over 40% of persons with chronic liver disease. The morbidity and mortality associated with HCV infection are due to its unique propensity to cause persistent infection in most persons, a feature that distinguishes this virus from other hepatitis viruses. The specific mechanisms underlying viral persistence are not known.
Although it has been controversial, the balance of evidence now favors the occasional sexual transmission of HCV. The risk of infection with HCV, like HBV, has been independently related to numbers of sexual partners in some STD clinic studies.226,285 Although the risk of HCV infection has been shown to correlate with numbers of partners and/or specific sexual practices in some studies of homosexual men,286,287 the risk of infection is overwhelmingly more closely tied to injection drug use. […] About 60-85% of all infections lead to virus persistence, and this is often associated with evidence of chronic liver disease […]. After many years, this process may culminate in cirrhosis and liver failure, or the development of hepatocellular carcinoma. These end-stage events in chronic hepatitis C may claim as many as 10,000 lives annually in the United States.326 […] As many as a third of patients found to have chronic hepatitis C will ultimately develop cirrhosis6,284,333,334; although it is not well defined, the fraction of all patients who are infected with the virus and progress to cirrhosis is undoubtedly far lower. Cirrhosis may be present within as little as 60 months of the initial infection but is identified typically in persons who have been infected for decades. Factors associated with disease progression include age at infection, regular alcohol consumption, coinfection with HIV or HBV, and more recently obesity and insulin resistance.335, 336 Some patients, usually with well-established cirrhosis, develop primary hepatocellular carcinoma.283 Chronic HCV infection is the most important etiology of hepatocellular carcinoma in western countries.
The major challenge to clinical investigators has been the discovery of specific markers that are predictive of progression of chronic hepatitis C to a clinically significant disease state. This remains an exceptionally difficult problem. There is no good correlation between biochemical markers and the extent of fibrosis or the presence or absence of cirrhosis. Indeed, many patients with cirrhosis have no obvious laboratory abnormalities.333 In addition, quantitative measurements of the viremia (“virus load”) are not useful in determining the extent of disease.335, 336”
(Sorry for the long wait for another update; in general I’d say I try quite hard not to let more than three days pass between updates, but due to personal stuff I wasn’t really able to find the time to blog anything over the last few days.)
The book mentioned in the post title is the sixth novel in the Thursday Next series by Jasper Fforde. I liked it better than the fifth book, and it’s probably among my top three in the series. In general I really like these books; Fforde plays around with a lot of ‘meta’ stuff that other books don’t, and the best way to illustrate this is probably through quotes (so I’ve added some of those below). Part of why I liked this book better than the previous one is also that it further develops the universe in which the action takes place; the previous book, at least in my opinion, did not really do this to nearly the same extent.
If you plan on reading this series, you should start from the beginning; if you’re not familiar with the context, you’ll get a lot less out of this book than you otherwise would.
I’ve added a little stuff from the book below – I have tried hard to not include any spoilers. The book is full of completely absurd stuff, and it has so much quote-worthy stuff that it would be easy for me to write at least another post or two like this one.
“‘I’m Alyona Ivanovna,’ said the third Russian with a trace of annoyance, ‘the rapacious old pawnbroker whose apparent greed and wealth lead you to murder.’
‘Are you sure you’re Ivanovna?’ asked Raskolnikov in a worried tone.
‘And you’re still alive?’
‘So it seems.’
He stared at the bloody axe.
‘Then who did I just kill?’
And they all looked at each other in confusion.”
“To a text-based life-form, unpredictable syntax and poor grammar are sources of huge discomfort. Ill-fitting grammar are like ill-fitting shoes. You can get used to it for a bit, but then one day your toes fall off and you can’t walk to the bathroom. Poor syntax is even worse. Change word order and sentence useless that for anyone Yoda except you have.”
“My book was first-person narrative, and if I wanted to have any sort of life outside my occasional readings – such as a date with Whitby or a secondary career – I needed someone to stand in for me.”
“reality was a pit of vipers for the unwary. Forget to breathe, miscalculate gravity or support the wrong god or football team and they’d be sending you home in a zinc coffin.”
“We fell silent for a moment as the tram rumbled on. I didn’t tell him that I yearned for the most under-appreciated luxury of the human race – free will. My life was by definition preordained. I had to do what I was written to do, say what I was written to say, without variance all day every day, whenever someone read me. Despite conversations like this where I could think philosophically rather than narratively, I could never shrug off the peculiar feeling that someone was controlling my movements, and eavesdropping on my every thought.”
“The queue to get out of Poetry was long, as always. The smuggling of Metaphor out of the genre was a serious problem […] The increased scarcity of raw Metaphor in Fiction had driven prices sky high, and people would take unbelievably foolish risks to smuggle it across. I’d heard stories of Metaphor being hidden in baggage, swallowed, even dressed up to look like ordinary objects whose meanings were then disguised to cloak the Metaphor. The problem then was trying to explain why you had a ‘Brooding Thunderstorm’ or ‘Broad sunlit uplands’ in your luggage. […] Distilling Metaphor out of raw euphemism was wasteful and expensive”
“‘The less people that know the better.’
‘Fewer. The fewer people that know the better.’
‘That’s what I meant.’
‘That’s what who meant?’
‘Wait – who’s speaking now?’
‘I don’t know.’
‘You must know.’
‘Damn. It must be me – you wouldn’t say “Damn”, would you?’
We both sat there for an empty moment, waiting for either a speech marker or a descriptive line. It was one of those things that happened every now and then in the Bookworld – akin to an empty, pregnant silence in the middle of an Outland dinner party. […] The taxi slowed down and stopped as the traffic ground to a halt. The cabby made some enquiries and found that a truckload of their had collided with a trailer containing there going in the opposite direction, and had spread the contents across the road.
‘Their will be a few hiccups after that,’ said the cabby, and I agreed. Homophone mishaps often seeped out into the RealWorld and infected the Outlanders, causing theire to be all manner of confusions.”
“Comedy was never straightforward. When all the good jokes had left, only the dubiously amusing stuff remained. Was the mimefield funny or not? To us, I think not. But it might have been funny to someone.” (well, this reader laughed…)
“I’m on leave and certainly not stealing military equipment, no ma’am.’ […] The clown sighed resignedly and opened his kitbag to reveal boxes of Military Grade Custard Pies. He wasn’t a very good smuggler. Few were.”
“‘[The Realworld is] highly disorderly,’ he explained, ‘not like here. There is no easily definable plot and you can run yourself ragged wondering what the significance of a chance encounter can be. You’ll also find that for the most part there is no shorthand to the narrative, so everything happens in a long and painfully drawn-out sequence. Apparently, the talk can be confusing – in general, most people just say the first thing that comes into their head.’
‘Is it as bad as they say?’
‘I’ve heard it’s worse. Here in the Bookworld we say what needs to be said for the story to proceed. Out there? Well, you can discount at least eighty per cent of chat as just meaningless drivel. […] The people to listen to are the ones who don’t say very much. […] above all, don’t be annoyed or distracted when random things happen to absolutely no purpose.’
‘There’s always a purpose,’ I said, amused by the notion of utter pointlessness, ‘even if you don’t understand what it is until much later.’
‘That’s a big difference between here and there,’ said Plum. ‘When things happen after a randomly pointless event, all that follows is simply unintended consequences, and not a coherent narrative thrust that propels the story forward.’
I rolled the idea of ‘unintended consequences’ around in my head.
‘Nope,’ I said finally, ‘you’ve got me on that one.’
‘It confuses me too,’ admitted Plum, ‘but that’s the RealWorld for you.'”
“It felt like covering for a character in a book without being told what the book was about, who was in it, or even what your character had been doing up until then. I’d done it twice in the BookWorld, so had some experience in these matters.”
“‘What about Red Herring, ma’am?’
‘I’m not sure. Is Red Herring a red herring? Or is it the fact that we’re meant to think Red Herring is a red herring that is actually the red herring?’
‘Or perhaps the fact that you’re meant to think Red Herring isn’t a red herring makes Red Herring a red herring after all.’
‘We’re talking serious meta-herrings here. Oh, craps, I’m lost again. Who’s talking now?'”
“‘Who is that?’ I asked as a man with his face obscured by a large pair of dark glasses hurried past and went below decks, followed by a porter carrying his suitcases.
‘He’s the mandatory MP-MC12: Mysterious Passenger in Cabin Twelve. All sweaty journeys upriver have to carry the full complement of odd characters. It’s a union thing.'”
The stuff below is excerpts and quotes from Pojman’s reprint of Ayer’s Language, Truth and Logic. I wasn’t quite sure how to blog this, and in particular I wasn’t sure whether I should wait with covering Ayer until I’d also finished reading Russell (I’m currently reading Russell’s The Problems of Philosophy from the same book, and I may decide to read William James later on as well), covering both in one post. I somewhat dislike writing long posts based on paper-books, so in the end I decided to just post what I’ve got. I haven’t really commented much on the stuff below or given my own views, partly because I’m lazy, but I guess I don’t mind going on the record here as someone who’s probably not exactly a big fan of metaphysics (…to the extent that I even know what it is; it’s not like I’ve wasted a lot of time on this kind of stuff).
“Like Hume, I divide all genuine propositions into two classes: those which, in his terminology, concern “relations of ideas,” and those which concern “matters of fact.” The former class comprises the a priori propositions of logic and pure mathematics, and these I allow to be necessary and certain only because they are analytic. That is, I maintain that the reason why these propositions cannot be confuted in experience is that they do not make any assertion about the empirical world, but simply record our determination to use symbols in a certain fashion. Propositions concerning empirical matters of fact, on the other hand, I hold to be hypotheses, which can be probable but never certain. […] no proposition, other than a tautology, can possibly be more than a probable hypothesis. […] A hypothesis cannot be conclusively confuted any more than it can be conclusively verified.”
“I adopt what may be called a modified verification principle. For I require of an empirical hypothesis, not indeed that it should be conclusively verifiable, but that some possible sense-experience should be relevant to the determination of its truth or falsehood. If a putative proposition fails to satisfy this principle, and is not a tautology, then I hold that it is metaphysical, and that, being metaphysical, it is neither true nor false but literally senseless. […] much of what ordinarily passes for philosophy is metaphysical according to this criterion […] The traditional disputes of philosophers are, for the most part, as unwarranted as they are unfruitful. [I included this quote in the recent quotes post as well] […] philosophy, as a genuine branch of knowledge, must be distinguished from metaphysics.”
“the fact that a conclusion does not follow from its putative premise is not sufficient to show that it is false.”
“We say that the question that must be asked about any putative statement of fact is not, Would any observations make its truth or falsehood logically certain? but simply, Would any observations be relevant to the determination of its truth or falsehood? And it is only if a negative answer is given to this second question that we conclude that the statement under consideration is nonsensical.”
“It must, of course, be admitted that our senses do sometimes deceive us. We may, as the result of having certain sensations, expect certain other sensations to be obtainable which are, in fact, not obtainable. But, in all such cases, it is further sense-experience that informs us of the mistakes that arise out of sense-experience. We say that the senses sometimes deceive us, just because the expectations to which our sense-experiences give rise do not always accord with what we subsequently experience. That is, we rely on our senses to substantiate or confute the judgments which are based on our sensations. […] Consequently, anyone who condemns the sensible world as a world of mere appearance, as opposed to reality, is saying something which, according to our criterion of significance, is literally nonsensical.” [On a related note I have always found it hard to figure out what’s the point of discussions about ‘whether we really live in a simulation or not’, and I’m slightly mystified by how many presumably very smart and in other respects seemingly sensible people consider exploring such questions in depth to be a good use of their limited time here on Earth.]
“existence is not an attribute. For, when we ascribe an attribute to a thing, we covertly assert that it exists: so that if existence were itself an attribute, it would follow that all positive existential propositions were tautologies, and all negative existential propositions self-contradictory; and this is not the case. […] In general, the postulation of real non-existent entities results from the superstition […] that, to every word or phrase that can be the grammatical subject of a sentence, there must somewhere be a real entity corresponding. For as there is no place in the empirical world for many of these “entities”, a special non-empirical world is invoked to house them. To this error must be attributed, not only the utterances of a Heidegger, who bases his metaphysics on the assumption that “Nothing” is a name which is used to denote something peculiarly mysterious, but also the prevalence of such problems as those concerning the reality of propositions and universals whose senselessness, though less obvious, is no less complete. […] The metaphysician […] does not intend to write nonsense. He lapses into it through being deceived by grammar, or through committing errors of reasoning”
i. “A part of kindness consists in loving people more than they deserve.” (Joseph Joubert)
ii. “There is only one thing about which I am certain, and this is that there is very little about which one can be certain.” (W. Somerset Maugham)
iii. “Sometimes people carry to such perfection the mask they have assumed that in due course they actually become the person they seem.” (-ll-)
iv. “… habits in writing as in life are only useful if they are broken as soon as they cease to be advantageous.” (-ll-)
v. “We are not the same persons this year as last; nor are those we love. It is a happy chance if we, changing, continue to love a changed person.” (-ll-)
vi. “It was not till quite late in life that I discovered how easy it is to say: “I don’t know.”” (-ll-)
vii. “Nothing in the world is permanent, and we’re foolish when we ask anything to last, but surely we’re still more foolish not to take delight in it while we have it.” (-ll-)
viii. “The traditional disputes of philosophers are, for the most part, as unwarranted as they are unfruitful.” (Alfred Ayer)
ix. “The less justified a man is in claiming excellence for his own self, the more ready he is to claim all excellence for his nation, his religion, his race or his holy cause.” (Eric Hoffer)
x. “Passionate hatred can give meaning and purpose to an empty life. Thus people haunted by the purposelessness of their lives try to find a new content not only by dedicating themselves to a holy cause but also by nursing a fanatical grievance. A mass movement offers them unlimited opportunities for both.” (-ll-)
xi. “The uncompromising attitude is more indicative of an inner uncertainty than of deep conviction. The implacable stand is directed more against the doubt within than the assailant without.” (-ll-)
xii. “When people are free to do as they please, they usually imitate each other.” (-ll-)
xiii. “To most of us nothing is so invisible as an unpleasant truth. Though it is held before our eyes, pushed under our noses, rammed down our throats — we know it not.” (-ll-)
xiv. “We lie loudest when we lie to ourselves.” (-ll-)
xv. “Kindness can become its own motive. We are made kind by being kind.” (-ll-)
xvi. “Wisdom never comes to those who believe they have nothing left to learn.” (Charles de Lint)
xvii. “Any impatient student of mathematics or science or engineering who is irked by having algebraic symbolism thrust on him should try to get on without it for a week.” (Eric Temple Bell)
xviii. “There are many things that seem impossible only so long as one does not attempt them.” (André Gide)
xix. “The most decisive actions of our life — I mean those that are most likely to decide the whole course of our future — are, more often than not, unconsidered.” (-ll-)
xx. “Experience is what you get when you didn’t get what you wanted.” (Randy Pausch)
“[H]uman papilloma virus, hepatitis B virus, hepatitis C virus, Epstein-Barr virus, human herpes virus 8, human T-cell lymphotropic virus 1, human immunodeficiency virus, Merkel cell polyomavirus, Helicobacter pylori, Opisthorchis viverrini, Clonorchis sinensis, Schistosoma haematobium […] are recognized as carcinogens and probable carcinogens by [the] International Agency for Research on Cancer (IARC). They are not considered in this book […] The aim of this monograph is to analyze associations of other infectious agents with cancer risk […] virology is not considered in our monograph: although there are some viruses that can be connected with cancer but are not included into the IARC list (John Cunningham virus, herpes simplex virus-1 and -2, human cytomegalovirus, simian virus 40, xenotropic murine leukemia virus-related virus), we decided to leave them for the virologists and to concentrate our efforts on other infectious agents (bacteria, protozoa, helminths and fungi) […] To the best of our knowledge, this is the first book devoted to this problem”
Here’s what I wrote on goodreads:
“This book is written by three Russian researchers, and you can tell; the language is occasionally hilariously bad, but it’s not too difficult to figure out what they’re trying to say. The content partially made up for the poor language, as the book covers quite a bit of ground considering the low page count.”
I gave the book two stars. I’m glad they wrote it, because it covered some stuff I didn’t know much about. I’m probably closer to one star than three, but that’s mostly because the book is terribly written, not because I have major objections to the coverage as such. What I mean by this is that they talk about a lot of studies and include a lot of data – they’re scientists writing about scientific research, they just happen to be Russian scientists who are not very good at English. The writing is terrible, but the stuff is interesting.
As mentioned above there are quite a few viruses which we know may lead to cancer in humans. I’ve recently read a lot of stuff about this topic, as it was covered both in Boffetta et al. and, rather extensively, in part 5 of the Sexually Transmitted Diseases text, which covered sexually transmitted viral pathogens (that section of the book was, at 230 pages, actually ‘book-length’; it was significantly longer than this book is..). I’ve even covered some of that stuff here on the blog, e.g. here. I may incidentally write more about these things and related stuff later, as I’m quite far behind in terms of my intended coverage of the STD book at the moment.
Anyway, viruses aren’t the only bad guys around. So these guys decided to write a book about some other infectious diseases affecting humans, and how these infectious diseases may relate to cancer risk. As they point out in the book, “there is only one bacterium, Helicobacter pylori, which is recognized by IARC as an established human carcinogen.” After reading this book you’ll realize that there are some others which perhaps look a bit suspicious as well. In some cases a lot of studies have been done and you have both animal-models, lab-analyses, case-control studies, cohort studies, … In other cases you have just a few small studies to judge from. As is always the case when people have a close look at epidemiological research, this stuff is messy. Sometimes studies that looked really convincing turn out to not replicate in larger samples, sometimes dramatically different effect sizes are found in different areas of the world (which may of course both be interpreted as an indicator that the ‘true’ effect sizes are different in the different subpopulations, or it may be interpreted as a result e.g. of faulty study design which makes those Swedish data look really fishy..), sometimes different results can be explained by differences in data quality/type of data applied/etc. (classic cases are different effects based on whether you rely on self reports or biological disease markers, and different results from analyses of bacterial cultures vs PCR analyses), and so on and so forth. There are a lot of details, and they cover them in the book. I occasionally see people criticize epidemiological research online on the grounds that many (‘all?’) results published in this area are just random correlations without any deeper meaning. Sometimes this criticism may well be warranted, and the authors of this book certainly in some cases seem to go quite a bit further than I would do based on the same data. But there’s another part of the story here. 
When you start out with a couple of case-control studies indicating that guys with cancer type X are more likely to have positive lab cultures for this specific micro-organism, that may not be a big deal. But perhaps then a few microbiologists show up and tell you that it would actually make a lot of sense if there was a connection here (and they might start talking about fancy stuff like various ‘modulations of host immune responses’, ‘inflammatory markers’, ‘the role of nitric oxides’, …). They conduct some studies as well and perhaps one of the things they find is that the observed cancer grades in the patients seem to depend quite a lot upon which of the pathogen subtypes the individual happens to be infected with (perhaps suddenly also providing an explanation for some previously surprising negative results in specific cases). And then perhaps you get a couple of animal studies that show that these animals get cancer when you infect them with these bugs and don’t treat the infection. Perhaps you have a few more studies as well in different populations, because Chinese people get cancer too, and you start seeing that people around the world who happen to be infected with these bugs are all more likely to get cancer, compared to the locals who are not infected (…or perhaps not, and then it just gets more fun…). This process goes on for a while, until at some point it starts getting really hard to think these positive correlations are all just the result of random p-value hunting done by bored researchers who don’t know what else to do with their time, and you start asking yourself if perhaps this idea is not as stupid as it was when you first encountered it. Most of the time the process stops before then because the proposed link isn’t there, but modern epidemiology is not just random collections of correlations.
In the context of the specific infectious diseases covered in the book, the people who have in some sense the final say in these things (the IARC) think we’re not quite there yet, but you have some cases where several different lines of evidence all seem to indicate that a link may be present and relevant. It would be highly surprising to me if in 20 years’ time we’d have realized that none of the infectious diseases they talk about in this book are at all involved in cancer pathogenesis. A related point is that most likely we’ve missed some ‘true connections’ along the way, and will continue to do so in the future, because even if a link is there, it’s sometimes really hard to find and easy to overlook, for many different reasons.
I have quoted a bit from the book below and added some comments here and there. To ease reading I have corrected some of the spelling/language errors the authors made; where a word is placed in brackets, it indicates that I’ve replaced a misspelled word with the correct one (the one ‘they meant to use’). The authors don’t even have a clue how and when to use the word ‘the’, often using it when it’s not needed and omitting it in cases where it is needed, which made quoting from the book painful. Read it for the content.
“Chronic inflammation substantially increases the probability of neoplastic transformation of the surrounding cells, inducing mutations and epigenetic alterations by the activity of inflammatory molecules […] through the formation of free radicals and DNA damage […] Since infectious agents persisting in the organism may cause chronic inflammation, they can also promote local carcinogenesis. […] Chronic inflammation can also specifically affect the functioning of [an] organ, for instance, promoting cholelithiasis and urolithiasis that increase the time of exposure of the gallbladder, bile ducts, urinary bladder and ureters to chemical carcinogens and carcinogenic bacteria. […] In addition to […] metabolic and immune mechanisms, a number of bacteria […] and protozoa […] [produce] or [contain] in their cell wall their own toxins […] possessing [carcinogenic] activity, affecting cell-cell interactions, intracellular signal transduction or induction of mutations and epigenetic alterations that can influence vital cell processes (apoptosis, proliferation, survival, growth, differentiation, invasion). Intracellular protozoan (Toxoplasma gondii) may induce resistance to multiple mechanisms of apoptosis […]. So, bacterial and [protozoan toxins] may function like initiating or like promoting agents.”
“Typhoid fever, which is a systemic infection caused by Salmonella enterica serovar Typhi (S. typhi), is a major health problem in developing countries. There are approximately 21.6 million cases of typhoid fever worldwide and an estimated 200,000 deaths every year. It is known that S. typhi may colonize the gallbladder, causing […] chronic inflammation. Welton et al. (1979) were the very first to [establish] an association between the typhoid-carrier state and death due to malignancies of the hepatobiliary tract. They recruited 471 U.S. carriers of [S. typhi], matched them with 942 controls and demonstrated that chronic typhoid carriers died of hepatobiliary cancer six times more often than the controls. […] The absence of basic research analyzing the carcinogenic properties of S. typhi does not allow placing it in the short list of the infectious agents that may be a cause of cancer development but are not included in the IARC roster, but this bacterium undoubtedly should be [on] the extended list. If [basic] studies on cell lines and animal models [support] the results of [the] epidemiological investigations, S. typhi can be placed [on] the short list.” [I included this in part because it is one of several examples in the book of how even strong correlations and high relative risks are not considered sufficient on their own by epidemiologists to settle matters. Some relative risks in other studies have been even higher – a study on gall-bladder cancer found an RR of 12.7].
“Tuberculosis (TB), a destructive disease [affecting] the lungs […] is a major global health burden, with about nine million of new cases and 1.1 million deaths annually. When the host protective immunity fails to control M. tuberculosis growth, progression to active disease occurs. […] According to the data of the last comprehensive systematic review and [meta-analysis] published by Brenner et al. (2011), there were 30 studies […] conducted in North America, Europe and Asia, which investigated the association of tuberculosis on lung cancer risk with adjustment for smoking. The relative risk (RR) of lung cancer development among patients with TB history was 1.76 (95% CI = 1.49–2.08).”
“22 studies from North America, Europe and East Asia [have] investigated the association between pneumonia and lung cancer risk while adjusting for smoking […] A significant increase in lung cancer risk was observed among all studies (RR = 1.43, 95% CI = 1.22–1.68). […] To sum up, there are basic as well as extensive epidemiological evidence that C. pneumoniae may cause lung cancer” [However, effect sizes seem to differ between countries. I was skeptical about this one in part because a non-smoker’s absolute risk of getting lung cancer is very low, meaning that relative risks in the neighbourhood of those reported above, although statistically significant, are probably clinically insignificant. How pneumonia and smoking interact seems to me a much more important question. Then again we haven’t got an explanation for all of the non-smoking-related lung cancers yet, and they are caused by something, so it’s also not like researching this is a complete waste of time.]
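To make the relative-vs-absolute-risk point concrete, here’s a minimal sketch of the arithmetic. The ~1% baseline lifetime lung cancer risk for a never-smoker is my own assumed round number for illustration, not a figure from the book:

```python
# Relative risk only matters clinically in proportion to the baseline risk.
baseline_risk = 0.01  # assumed: ~1% lifetime lung cancer risk for a never-smoker
rr = 1.43             # pooled relative risk reported for pneumonia history

exposed_risk = baseline_risk * rr
absolute_increase = exposed_risk - baseline_risk

print(f"risk without exposure: {baseline_risk:.2%}")      # 1.00%
print(f"risk with exposure:    {exposed_risk:.2%}")       # 1.43%
print(f"absolute increase:     {absolute_increase:.2%}")  # 0.43%
```

So a 43% relative increase translates, at this baseline, into less than half a percentage point of absolute risk – which is the sense in which a statistically solid RR can still be clinically minor.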
“Primary infection with C. trachomatis [Chlamydia], the most prevalent sexually transmitted bacterium worldwide with an estimated 90 million new cases occurring each year, is often asymptomatic and may persist for several months or years. The first study analyzing possible association of C. trachomatis with cervical cancer was carried out by Schachter et al. (1975) who assessed the prevalence of antibodies to TRIC (trachoma-inclusion conjunctivitis) agents in women with cervical dysplasia and in women attending selected clinics […]. According to this investigation, antibodies to chlamydiae were identified in 77.6% of the women with dysplasia or cervical cancer whereas antichlamydial antibodies were less prevalent in the other clinic populations. Four years later, Paavonen et al. (1979) obtained [similar] results in 93 of patients with cervical dysplasia comparing them to the controls. […] Smith et al. (2001, 2002) examined 499 women with incident invasive cervical cancer cases and 539 control patients from Brazil and the Philippines, detecting that C. trachomatis increased risk of squamous cervical cancer among HPV-positive women (OR=2.1; 95% CI=1.1–4.0). The results were similar in both countries.” [As I recently pointed out elsewhere, “Chronic infection with HPV is a necessary cause of cervical cancer. Using sensitive molecular techniques, virtually all tumours are positive for the virus.” But as this finding (and other related findings) indicate, other infectious processes may play a role as well in HPV-related cancers. Synergistic effects are common in this area (recall for example also the herpes simplex virus-HIV link).]
“Trichomonas vaginalis (T. vaginalis), a protozoan parasite, is the causative agent of trichomoniasis, the most common nonviral sexually transmitted disease in humans. This parasite has a worldwide distribution and it infects 250–350 million people worldwide. [wiki says ~150 mil, but these guesstimates should always be taken with a grain of salt. Either way it affects a lot of people] […] Zhang et al. (1995) observed a relationship between T. vaginalis infection and cervical cancer in [their] prospective study in a cohort of 16,797 Chinese women. T. vaginalis-infection correlated with higher cervical cancer risk (RR=3.3, 95% CI=1.5–7.4). In a large cohort study conducted in Finland by Viikki et al. (2000) T. vaginalis was associated with a high RR of cervical cancer, 6.4 (95% CI = 3.7–10) and SIR [standardized incidence ratio]=5.5 (95% CI=4.2–7.2), respectively. […] Mekki and Ivić (1979) detected that T. vaginalis were of a significantly smaller diameter in invasive carcinoma and carcinoma in situ in comparison with dysplasia. In the control group with trichomoniasis alone, the diameter of T. vaginalis was twice as large as that in carcinoma and larger compared to dysplasia, indicating that small forms of T. vaginalis are more carcinogenic than large ones. […] To sum up, there are basic as well as epidemiological evidence that T. vaginalis may be a cause of cervical and prostate cancer […] For cervical cancer it is evident, for prostate cancer it is arguable. According to our criteria, it is possible to include it in the short list of the infectious agents that may be a cause of cancer development but are not placed in the IARC roster.”
“At the moment of publication, IARC [recognizes] Schistosoma haematobium [and] [S. mansoni], Opisthorchis viverrini, and Clonorchis sinensis as causative agents of cancer, leaving a possibility to enlarge this list by [Schistosoma japonicum] [and] Opisthorchis felineus.” [The authors think the list should be enlarged even more, but I did not find their helminth data/coverage very convincing (not much research has been done in this area), so I decided not to cover these things here].
i. Albert Stevens.
“Albert Stevens (1887–1966), also known as patient CAL-1, was the subject of a human radiation experiment, and survived the highest known accumulated radiation dose in any human. On May 14, 1945, he was injected with 131 kBq (3.55 µCi) of plutonium without his knowledge or informed consent.
Plutonium remained present in his body for the remainder of his life, the amount decaying slowly through radioactive decay and biological elimination. Stevens died of heart disease some 20 years later, having accumulated an effective radiation dose of 64 Sv (6400 rem) over that period. The current annual permitted dose for a radiation worker in the United States is 5 rem. […] Stevens’s annual dose was approximately 60 times this amount.”
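The ‘approximately 60 times’ figure is easy to check. A back-of-the-envelope sketch; the ~21-year span (May 1945 injection to his death in early 1966) is my reading of the dates above:

```python
# Stevens's accumulated dose vs the modern US occupational limit.
total_dose_rem = 6400   # 64 Sv, from the quote above
years = 21              # assumed: May 1945 injection to death in early 1966
annual_limit_rem = 5    # current annual limit for a US radiation worker

annual_dose = total_dose_rem / years           # average rem per year
multiple_of_limit = annual_dose / annual_limit_rem

print(round(annual_dose))        # 305 rem/year on average
print(round(multiple_of_limit))  # ~61 times the annual limit
```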
“Plutonium was handled extensively by chemists, technicians, and physicists taking part in the Manhattan Project, but the effects of plutonium exposure on the human body were largely unknown. A few mishaps in 1944 had caused certain alarm amongst project leaders, and contamination was becoming a major problem in and outside the laboratories. […] As the Manhattan Project continued to use plutonium, airborne contamination began to be a major concern. Nose swipes were taken frequently of the workers, with numerous cases of moderate and high readings. […] Tracer experiments were begun in 1944 with rats and other animals with the knowledge of all of the Manhattan project managers and health directors of the various sites. In 1945, human tracer experiments began with the intent to determine how to properly analyze excretion samples to estimate body burden. Numerous analytic methods were devised by the lead doctors at the Met Lab (Chicago), Los Alamos, Rochester, Oak Ridge, and Berkeley. The first human plutonium injection experiments were approved in April 1945 for three tests: April 10 at the Manhattan Project Army Hospital in Oak Ridge, April 26 at Billings Hospital in Chicago, and May 14 at the University of California Hospital in San Francisco. Albert Stevens was the person selected in the California test and designated CAL-1 in official documents. […] The plutonium experiments were not isolated events. During this time, cancer researchers were attempting to discover whether certain radioactive elements might be useful to treat cancer. Recent studies on radium, polonium, and uranium proved foundational to the study of Pu toxicity. […] The mastermind behind this human experiment with plutonium was Dr. Joseph Gilbert Hamilton, a Manhattan Project doctor in charge of the human experiments in California. Hamilton had been experimenting on people (including himself) since the 1930s at Berkeley. 
[…] Hamilton eventually succumbed to the radiation that he explored for most of his adult life: he died of leukemia at the age of 49.”
“Although Stevens was the person who received the highest dose of radiation during the plutonium experiments, he was neither the first nor the last subject to be studied. Eighteen people aged 4 to 69 were injected with plutonium. Subjects who were chosen for the experiment had been diagnosed with a terminal disease. They lived from 6 days up to 44 years past the time of their injection. Eight of the 18 died within 2 years of the injection. All died from their preexisting terminal illness, or cardiac illnesses. […] As with all radiological testing during World War II, it would have been difficult to receive informed consent for Pu injection studies on civilians. Within the Manhattan Project, plutonium was referred to often by its code “49” or simply the “product.” Few outside of the Manhattan Project would have known of plutonium, much less of the dangers of radioactive isotopes inside the body. There is no evidence that Stevens had any idea that he was the subject of a secret government experiment in which he would be subjected to a substance that would have no benefit to his health.”
The best part is perhaps this: Stevens was not terminal: “He had checked into the University of California Hospital in San Francisco with a gastric ulcer that was misdiagnosed as terminal cancer.” Given that one of the people involved in these experiments survived for 44 years after injection, and that four other experimentees were still alive by the time Stevens died, it seems pretty obvious that he was not the only one who was misdiagnosed. One interpretation of the fact that more than half survived beyond two years might be that the definition of ‘terminal’ applied in this context was, well, slightly flexible (especially considering that large injections of radioactive poisons may not exactly have increased these people’s life expectancies). Today the term is usually reserved for conditions which people can expect to die from within 6 months – 2 years is a long time in this context. It may however also to some extent just have reflected the state of medical science at the time – also illustrative in that respect is how the surgeons screwed him over during his illness: “Half of the left lobe of the liver, the entire spleen, most of the ninth rib, lymph nodes, part of the pancreas, and a portion of the omentum… were taken out” to help prevent the spread of the cancer that Stevens did not have. In case you were wondering, not only did they not tell him he was part of an experiment; they also never told him he had been misdiagnosed with cancer.
ii. Aberration of light.
“The aberration of light (also referred to as astronomical aberration or stellar aberration) is an astronomical phenomenon which produces an apparent motion of celestial objects about their locations dependent on the velocity of the observer. Aberration causes objects to appear to be angled or tilted towards the direction of motion of the observer compared to when the observer is stationary. The change in angle is typically very small, on the order of v/c where c is the speed of light and v the velocity of the observer. In the case of “stellar” or “annual” aberration, the apparent position of a star to an observer on Earth varies periodically over the course of a year as the Earth’s velocity changes as it revolves around the Sun […] Aberration is historically significant because of its role in the development of the theories of light, electromagnetism and, ultimately, the theory of Special Relativity. […] In 1729, James Bradley provided a classical explanation for it in terms of the finite speed of light relative to the motion of the Earth in its orbit around the Sun, which he used to make one of the earliest measurements of the speed of light. However, Bradley’s theory was incompatible with 19th century theories of light, and aberration became a major motivation for the aether drag theories of Augustin Fresnel (in 1818) and G. G. Stokes (in 1845), and for Hendrik Lorentz’ aether theory of electromagnetism in 1892. The aberration of light, together with Lorentz’ elaboration of Maxwell’s electrodynamics, the moving magnet and conductor problem, the negative aether drift experiments, as well as the Fizeau experiment, led Albert Einstein to develop the theory of Special Relativity in 1905, which provided a conclusive explanation for the aberration phenomenon. […]
Aberration may be explained as the difference in angle of a beam of light in different inertial frames of reference. A common analogy is to the apparent direction of falling rain: If rain is falling vertically in the frame of reference of a person standing still, then to a person moving forwards the rain will appear to arrive at an angle, requiring the moving observer to tilt their umbrella forwards. The faster the observer moves, the more tilt is needed.
The net effect is that light rays striking the moving observer from the sides in a stationary frame will come angled from ahead in the moving observer’s frame. This effect is sometimes called the “searchlight” or “headlight” effect.
In the case of annual aberration of starlight, the direction of incoming starlight as seen in the Earth’s moving frame is tilted relative to the angle observed in the Sun’s frame. Since the direction of motion of the Earth changes during its orbit, the direction of this tilting changes during the course of the year, and causes the apparent position of the star to differ from its true position as measured in the inertial frame of the Sun.
While classical reasoning gives intuition for aberration, it leads to a number of physical paradoxes […] The theory of Special Relativity is required to correctly account for aberration.”
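The v/c scale of the effect is easy to make concrete. A minimal sketch for annual aberration using Earth’s mean orbital speed; the numbers are standard textbook values, not from the article:

```python
import math

AU = 1.496e11    # mean Earth-Sun distance, metres
year = 3.156e7   # seconds in a year
c = 2.998e8      # speed of light, m/s

# Earth's mean orbital speed, treating the orbit as a circle of radius 1 AU
v = 2 * math.pi * AU / year

# classical small-angle aberration, alpha ~ v/c, converted to arcseconds
alpha_rad = v / c
alpha_arcsec = math.degrees(alpha_rad) * 3600

print(f"orbital speed: {v / 1000:.1f} km/s")   # ~29.8 km/s
print(f"aberration:    {alpha_arcsec:.1f} arcseconds")  # ~20.5
```

This ~20.5″ is the classical ‘constant of aberration’, the maximum annual displacement of a star’s apparent position; the special-relativistic correction enters only at order (v/c)², far below this.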
The article has much more, in particular it has a lot of stuff about historical aspects pertaining to this topic.
iii. Spanish Armada.
“The Spanish Armada (Spanish: Grande y Felicísima Armada or Armada Invencible, literally “Great and Most Fortunate Navy” or “Invincible Fleet”) was a Spanish fleet of 130 ships that sailed from A Coruña in August 1588 under the command of the Duke of Medina Sidonia with the purpose of escorting an army from Flanders to invade England. The strategic aim was to overthrow Queen Elizabeth I of England and the Tudor establishment of Protestantism in England, with the expectation that this would put a stop to English interference in the Spanish Netherlands and to the harm caused to Spanish interests by English and Dutch privateering.
The Armada chose not to attack the English fleet at Plymouth, then failed to establish a temporary anchorage in the Solent, after one Spanish ship had been captured by Francis Drake in the English Channel, and finally dropped anchor off Calais. While awaiting communications from the Duke of Parma‘s army the Armada was scattered by an English fireship attack. In the ensuing Battle of Gravelines the Spanish fleet was damaged and forced to abandon its rendezvous with Parma’s army, who were blockaded in harbour by Dutch flyboats. The Armada managed to regroup and, driven by southwest winds, withdrew north, with the English fleet harrying it up the east coast of England. The commander ordered a return to Spain, but the Armada was disrupted during severe storms in the North Atlantic and a large portion of the vessels were wrecked on the coasts of Scotland and Ireland. Of the initial 130 ships over a third failed to return. […] The expedition was the largest engagement of the undeclared Anglo-Spanish War (1585–1604). The following year England organised a similar large-scale campaign against Spain, the Drake-Norris Expedition, also known as the Counter-Armada of 1589, which was also unsuccessful. […]
The fleet was composed of 130 ships, 8,000 sailors and 18,000 soldiers, and bore 1,500 brass guns and 1,000 iron guns. […] In the Spanish Netherlands 30,000 soldiers awaited the arrival of the armada, the plan being to use the cover of the warships to convey the army on barges to a place near London. All told, 55,000 men were to have been mustered, a huge army for that time. […] The English fleet outnumbered the Spanish, with 200 ships to 130, while the Spanish fleet outgunned the English—its available firepower was 50% more than that of the English. The English fleet consisted of the 34 ships of the royal fleet (21 of which were galleons of 200 to 400 tons), and 163 other ships, 30 of which were of 200 to 400 tons and carried up to 42 guns each; 12 of these were privateers owned by Lord Howard of Effingham, Sir John Hawkins and Sir Francis Drake. […] The Armada was delayed by bad weather […], and was not sighted in England until 19 July, when it appeared off The Lizard in Cornwall. The news was conveyed to London by a system of beacons that had been constructed all the way along the south coast.”
“During all the engagements, the Spanish heavy guns could not easily be run in for reloading because of their close spacing and the quantities of supplies stowed between decks […] Instead the gunners fired once and then jumped to the rigging to attend to their main task as marines ready to board enemy ships, as had been the practice in naval warfare at the time. In fact, evidence from Armada wrecks in Ireland shows that much of the fleet’s ammunition was never spent. Their determination to fight by boarding, rather than cannon fire at a distance, proved a weakness for the Spanish; it had been effective on occasions such as the battles of Lepanto and Ponta Delgada (1582), but the English were aware of this strength and sought to avoid it by keeping their distance. With its superior manoeuvrability, the English fleet provoked Spanish fire while staying out of range. The English then closed, firing repeated and damaging broadsides into the enemy ships. This also enabled them to maintain a position to windward so that the heeling Armada hulls were exposed to damage below the water line. Many of the gunners were killed or wounded, and the task of manning the cannon often fell to the regular foot soldiers on board, who did not know how to operate the guns. The ships were close enough for sailors on the upper decks of the English and Spanish ships to exchange musket fire. […] The outcome seemed to vindicate the English strategy and resulted in a revolution in naval battle tactics with the promotion of gunnery, which until then had played a supporting role to the tasks of ramming and boarding.”
“In September 1588 the Armada sailed around Scotland and Ireland into the North Atlantic. The ships were beginning to show wear from the long voyage, and some were kept together by having their hulls bundled up with cables. Supplies of food and water ran short. The intention would have been to keep well to the west of the coast of Scotland and Ireland, in the relative safety of the open sea. However, there being at that time no way of accurately measuring longitude, the Spanish were not aware that the Gulf Stream was carrying them north and east as they tried to move west, and they eventually turned south much further to the east than planned, a devastating navigational error. Off the coasts of Scotland and Ireland the fleet ran into a series of powerful westerly winds […] Because so many anchors had been abandoned during the escape from the English fireships off Calais, many of the ships were incapable of securing shelter as they reached the coast of Ireland and were driven onto the rocks. Local men looted the ships. […] more ships and sailors were lost to cold and stormy weather than in direct combat. […] Following the gales it is reckoned that 5,000 men died, by drowning, starvation and slaughter at the hands of English forces after they were driven ashore in Ireland; only half of the Spanish Armada fleet returned home to Spain. Reports of the passage around Ireland abound with strange accounts of hardship and survival.
In the end, 67 ships and fewer than 10,000 men survived. Many of the men were near death from disease, as the conditions were very cramped and most of the ships ran out of food and water. Many more died in Spain, or on hospital ships in Spanish harbours, from diseases contracted during the voyage.”
iv. Viral hemorrhagic septicemia.
“Viral hemorrhagic septicemia (VHS) is a deadly infectious fish disease caused by the viral hemorrhagic septicemia virus (VHSV, or VHSv), different strains of which occur in different regions and affect different species. It afflicts over 50 species of freshwater and marine fish in several parts of the northern hemisphere. There are no signs that the disease affects human health. VHS is also known as “Egtved disease,” and VHSV as “Egtved virus.”
Historically, VHS was associated mostly with freshwater salmonids in western Europe, documented as a pathogenic disease among cultured salmonids since the 1950s. Today it is still a major concern for many fish farms in Europe and is therefore being watched closely by the European Community Reference Laboratory for Fish Diseases. It was first discovered in the US in 1988 among salmon returning from the Pacific in Washington State. This North American genotype was identified as a distinct, more marine-stable strain than the European genotype. VHS has since been found afflicting marine fish in the northeastern Pacific Ocean, the North Sea, and the Baltic Sea. Since 2005, massive die-offs have occurred among a wide variety of freshwater species in the Great Lakes region of North America.”
The article isn’t that great but I figured I should include it anyway because I find it sort of fascinating how almost all humans alive can and do live their entire lives without necessarily ever knowing anything about stuff like this. Humans have some really obvious blind spots when it comes to knowledge about some of the stuff we put into our mouths on a regular basis.
v. Bird migration.
“Bird migration is the regular seasonal movement, often north and south along a flyway between breeding and wintering grounds, undertaken by many species of birds. Migration, which carries high costs in predation and mortality, including from hunting by humans, is driven primarily by availability of food. Migration occurs mainly in the Northern Hemisphere where birds are funnelled on to specific routes by natural barriers such as the Mediterranean Sea or the Caribbean Sea.”
“Historically, migration has been recorded as much as 3,000 years ago by Ancient Greek authors including Homer and Aristotle […] Aristotle noted that cranes traveled from the steppes of Scythia to marshes at the headwaters of the Nile. […] Aristotle however suggested that swallows and other birds hibernated. […] It was not until the end of the eighteenth century that migration as an explanation for the winter disappearance of birds from northern climes was accepted […] [and Aristotle’s hibernation] belief persisted as late as 1878, when Elliott Coues listed the titles of no less than 182 papers dealing with the hibernation of swallows.”
“Approximately 1800 of the world’s 10,000 bird species are long-distance migrants. […] Within a species not all populations may be migratory; this is known as “partial migration”. Partial migration is very common in the southern continents; in Australia, 44% of non-passerine birds and 32% of passerine species are partially migratory. In some species, the population at higher latitudes tends to be migratory and will often winter at lower latitude. The migrating birds bypass the latitudes where other populations may be sedentary, where suitable wintering habitats may already be occupied. This is an example of leap-frog migration. Many fully migratory species show leap-frog migration (birds that nest at higher latitudes spend the winter at lower latitudes), and many show the alternative, chain migration, where populations ‘slide’ more evenly North and South without reversing order.
Within a population, it is common for different ages and/or sexes to have different patterns of timing and distance. […] Many, if not most, birds migrate in flocks. For larger birds, flying in flocks reduces the energy cost. Geese in a V-formation may conserve 12–20% of the energy they would need to fly alone. […] Seabirds fly low over water but gain altitude when crossing land, and the reverse pattern is seen in landbirds. However most bird migration is in the range of 150 m (500 ft) to 600 m (2000 ft). Bird strike aviation records from the United States show most collisions occur below 600 m (2000 ft) and almost none above 1800 m (6000 ft). Bird migration is not limited to birds that can fly. Most species of penguin migrate by swimming.”
“Some Bar-tailed Godwits have the longest known non-stop flight of any migrant, flying 11,000 km from Alaska to their New Zealand non-breeding areas. Prior to migration, 55 percent of their bodyweight is stored fat to fuel this uninterrupted journey. […] The Arctic Tern has the longest-distance migration of any bird, and sees more daylight than any other, moving from its Arctic breeding grounds to the Antarctic non-breeding areas. One Arctic Tern, ringed (banded) as a chick on the Farne Islands off the British east coast, reached Melbourne, Australia in just three months from fledging, a sea journey of over 22,000 km (14,000 mi). […] The most pelagic species, mainly in the ‘tubenose’ order Procellariiformes, are great wanderers, and the albatrosses of the southern oceans may circle the globe as they ride the “roaring forties” outside the breeding season. The tubenoses spread widely over large areas of open ocean, but congregate when food becomes available. Many are also among the longest-distance migrants; Sooty Shearwaters nesting on the Falkland Islands migrate 14,000 km (8,700 mi) between the breeding colony and the North Atlantic Ocean off Norway. Some Manx Shearwaters do this same journey in reverse. As they are long-lived birds, they may cover enormous distances during their lives; one record-breaking Manx Shearwater is calculated to have flown 8 million km (5 million miles) during its over-50 year lifespan.”
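Incidentally, the lifetime-distance figure for that record-breaking Manx Shearwater is easier to appreciate broken down per year and per day; a quick back-of-the-envelope calculation (the breakdown is my own arithmetic, not from the quoted text):

```python
# Lifetime distance of the record-breaking Manx Shearwater, as quoted:
lifetime_km = 8_000_000
years = 50

per_year = lifetime_km / years
per_day = per_year / 365

print(f"{per_year:,.0f} km/year")  # 160,000 km/year
print(f"{per_day:,.0f} km/day")    # ~438 km/day on average, i.e. roughly
# four circumnavigations of the Earth (~40,000 km) every year of its life
```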
“Bird migration is primarily, but not entirely, a Northern Hemisphere phenomenon. This is because land birds in high northern latitudes, where food becomes scarce in winter, leave for areas further south (including the Southern Hemisphere) to overwinter, and because the continental landmass is much larger in the Northern Hemisphere [see also this post]. In contrast, among (pelagic) seabirds, species of the Southern Hemisphere are more likely to migrate. This is because there is a large area of ocean in the Southern Hemisphere, and more islands suitable for seabirds to nest.”
I was not super impressed with the coverage in part 3, although there was a lot of interesting stuff there as well. The level of coverage and the amount of detail included are, however, high in parts four and five. A lot of details in some of the recent chapters evaded me, but I also learned a great deal. There’s quite a lot of coverage of various ‘related topics’ (microbiology, biochemistry, immunology, oncology) in the parts of the book I’ve read recently, and like many other medical texts this book will help you realize that many things you had thought of as unrelated are actually connected in various interesting ways. It’s worth noting that given how many aspects of these things the book covers (again, 2000+ pages…) you actually get to know a lot of stuff about a lot of other things besides just ‘classic STDs’. It turns out that in Jamaica and Trinidad, over 70% of all lymphoid malignancies are attributable to exposure to a specific retrovirus most people probably haven’t heard about, HTLV-1 (prevalence is also high in other parts of the world, e.g. southern Japan). I didn’t expect to learn this from a book about sexually transmitted diseases, but there we are.
I hope that the stuff I’ve picked out from this part of the coverage is intelligible also to people who haven’t read the 95+% of those chapters that I didn’t quote (I always like feedback on such aspects).
“At the simplest level, infection of a cell by a virus or bacterium may lead to cell death. In the case of viruses, specific disease syndromes may be caused by destruction of certain subsets of cells that express essential differentiated functions. A classic example of this is the development of AIDS following HIV-1 mediated depletion of the CD4 lymphocyte population. Virus-induced cell death may result from one or more specific mechanisms. Many viruses express specific proteins that have as their major function the induction of a blockade in normal host cell metabolism (cellular translation and transcription) such that the metabolic machinery of the cell is subverted preferentially to viral replication. For obvious reasons, the expression of such proteins is usually highly toxic to the cell. Cellular destruction or “direct cytopathic effect” is considered responsible for the disease manifestations of many lytic viruses, including, for example, HSV and poliovirus. On the other hand, many cells may respond to the presence of an invading virus by the induction of apoptosis and the initiation of programmed cell death. Some viruses appear to have evolved mechanisms to prevent or delay apoptosis, thus potentially prolonging productive infection and maximizing replication. For example, HSV-1 infection induces apoptosis at multiple metabolic checkpoints but has also evolved mechanisms to block apoptosis at each point.28 Importantly, the inhibition of apoptosis by HSV-1 also prevents apoptosis induced by virus-specific cytotoxic T lymphocytes, thereby conferring on the infected cell a certain measure of resistance to the host’s cell-mediated immune responses.29
However, many viruses are not intrinsically cytopathic. HBV is a prime example, as many infected HBsAg carriers are asymptomatic and without overt evidence of active liver disease. Despite this, such carriers may be very infectious […] The presence or absence of liver disease is largely determined by the T-cell response to the virus.30 Thus, chronic hepatitis B results from a relatively vigorous but unsuccessful attempt on the part of the host to eliminate the infection. […] chronic liver inflammation and the occurrence of hepatocellular carcinoma reflect the immune response to the virus, rather than specific virus effects. Similar indirect mechanisms may contribute to the progressive immune destruction of infected CD4-positive lymphocytes in patients with HIV-1 infection.
“Some bacterial disease processes may also be caused largely by immunopathologic responses. For instance, there is substantial evidence that complications of genital chlamydia infections (salpingitis, Reiter’s syndrome) are correlated with and may be owing to stimulation of antibodies against a heat-shock protein (hsp60).33,34 […] In contrast, gonococcal tissue damage appears to be caused by the direct toxic effects of lipid A and peptidoglycan fragments”
“Some viruses are capable of altering differentiated cellular functions, resulting in the production of disease by mechanisms that do not exist among bacteria. A prime example is the altered cellular growth that follows infections by molluscum contagiosum virus (MCV) […]. A more extreme example is the proliferation of epithelial cells that is induced by infection with HPVs. HPV-related epithelial malignancies and cellular transformation are related to the expression of two specific HPV proteins, the E6 and E7 oncoproteins, by high-risk HPV subtypes.22 These proteins interact with p53 and pRb, both promoting cellular proliferation and cell survival. Oncogenic transformation is usually associated with high-level expression of E7 from integrated HPV DNA. The Kaposi’s sarcoma-associated herpes virus (KSHV) also expresses a number of proteins that mimic important host regulators of cellular proliferation and survival […] Expression of these proteins may result in deregulation of cell growth, with changes in the cellular morphology and/or acquisition of the ability of the cells to form colonies in soft agar, changes that are indicative of transformation.
On the other hand, hepatocellular cancers occurring in the context of chronic viral hepatitis are likely to have an alternative explanation. Although it is possible that integration of HBV DNA may be responsible for altered cellular growth control in some hepatitis B-associated cases, liver cancer in this setting may be primarily immunopathogenic.30,32 Chronic inflammation accompanied by oxidative stress and cellular DNA damage are likely to pla[y] important roles.”
“The human immunodeficiency viruses (HIV-1 and HIV-2) and the simian immunodeficiency viruses (SIV) (with a subscript indicating the species of origin) are members of the lentivirus genus of the Retroviridae family, commonly called retroviruses. […] Retroviruses are divided into two subfamilies: Orthoretrovirinae and Spumaretrovirinae […] The spumaretroviruses have distinctive features of their replication cycle that require this more distant classification. They have been isolated from primates, but not humans, and are not associated with any known disease. The orthoretroviruses are divided into six genera and represent viruses that infect snakes, fish, birds, and mammals. […] Human infections occur with viruses from two of these genera. The Deltaretrovirus genus includes human T-cell leukemia virus type I (HTLV-I), the causative agent of adult T-cell leukemia,5, 6, 7 and human T-cell leukemia virus type II (HTLV-II), which is not known to be associated with any disease syndrome. HTLV-I is also associated with another syndrome called HTLV-associated myelopathy (HAM). HTLV-I and HTLV-II are related to viruses found in primates and more distantly related to bovine leukemia virus. The lentivirus genus includes HIV-1 and HIV-2 as well as viruses found in a variety of mammals ranging from primates to sheep. Viruses within these different genera vary widely in the diseases they cause and the mechanisms of disease induction, in contrast to the many common features of their replication cycle. […] In its DNA form the viral genome is inserted into the host genome […]. This step in the virus life cycle has important implications for several features of virus-host interactions. For example, viral DNA that integrates into the genome of a cell but is not expressed becomes silently carried in the descendants of that cell.
When this happens in a germline cell, or in the cell of an early embryo that becomes a germline cell, this copy of viral DNA becomes a linked physical part of the host genome, is present in every cell in the body, and is passed on to subsequent generations. Such a genetic element is called an endogenous retrovirus. Most of the elements that become fixed are defective, as there is probably a strong selective pressure against elements that can activate to produce infectious virus. Thus, they represent an archive within the host genome of previous waves of retroviral infections. In fact, the human genome carries a record of retroviral infections over the last 40 million years of primate evolution. These are viruses that we do not recognize as active in the human population at present but are represented by 110,000 genomic inserts of gammaretroviruses, 10,000 inserts of betaretroviruses, and 80,000 inserts of a genus that may be distantly related to spumaretroviruses or may represent an uncharacterized lineage.10 Most of these elements contain large deletions; however, had the full-length elements been retained, our genomes would be 40% endogenous retrovirus sequence by mass, and these elements would outnumber our normal genes 7 to 1.”
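A quick arithmetic check of the insert counts and the ‘7 to 1’ claim in the passage above (the inference about the implied gene count is mine, not the book’s):

```python
# Endogenous retroviral insert counts quoted in the text:
gamma = 110_000
beta = 10_000
unclassified = 80_000

total_inserts = gamma + beta + unclassified
print(total_inserts)       # 200000

# "outnumber our normal genes 7 to 1" implies a gene count of roughly:
print(total_inserts // 7)  # 28571 - consistent with the ~25,000-30,000
# human gene estimates in circulation when the chapter was written
```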
“Most histories of retroviruses start with the dramatic discovery by Peyton Rous in 1911 that a virus, Rous sarcoma virus (RSV), could cause cancer. […] The isolation of other tumor-causing retroviruses followed and in time it became apparent that there were two broad classes of agents: one class of viruses caused cancer after a long latency period […], while the other class caused tumors that appeared rapidly […]. We now know that the acutely transforming retroviruses carry a cell-derived oncogene that is responsible for the transforming activity,14 while the slowly transforming retroviruses act by the chance integration of viral DNA near these cellular oncogenes in the host genome to induce their expression and promote tumor formation.15,16 Importantly, many of these same genes can be mutated or overexpressed in human cancers, and the proteins they encode are now the targets of new generations of specific antitumor therapies […] One can confidently surmise that the remnants of the beta- and gammaretroviruses littered in our genomes had such oncogenic effects when they were active. Ironically, for the active human retroviruses, HTLV-I causes tumors by a different but still poorly understood mechanism, and HIV is involved in tumor formation only indirectly through immune suppression. […] There are two fundamental differences between lentiviruses and most other retroviruses: Lentiviruses do not cause cancer [directly…] and they establish chronic infections that result in a long incubation period followed by a chronic symptomatic disease. The “slow” (lenti is Latin for slow), chronic nature of these viral infections was first appreciated for a disease of sheep called maedi-visna (maedi = labored breathing, visna = paralysis and wasting).”
“Using the current sequence diversity in the HIV-1 population, the 1959 sequence, and estimates of the rate of sequence change per year, it has been possible to suggest that the cross-species transmission event that gave rise to the M group of HIV-1 occurred early in the twentieth century.38 If we accept that SIVcpz [HIV in chimps…] has entered the human population three times in the last century (the three groups N, O, and M), then it follows that this virus likely has been transmitted to humans any number of times over the last 10,000 years. Only in the last century have the human institutions of large cities and efficient transportation corridors given these transmission events access to a human environment that could support an epidemic.”
“Over 100 herpesviruses have been identified, with at least eight infecting humans [I had no idea there were that many of them, and I had no clue some of the ones mentioned were actually herpes viruses…]. All human herpesviruses are well adapted to their natural host, being endemic in all human populations studied and carried by a significant fraction of persons in each population. The human herpesviruses include herpes simplex viruses types 1 and 2 (HSV-1 and HSV-2), varicella-zoster virus (VZV), Epstein-Barr virus (EBV), cytomegalovirus (CMV), human herpesvirus 6 (HHV-6), human herpesvirus 7 (HHV-7), and human herpesvirus 8 (HHV-8) or Kaposi’s sarcoma (KS)-associated herpesvirus. Disease caused by human herpesviruses tends to be relatively mild and self-limited in immunocompetent persons, although severe and quite unusual disease can be seen with immunosuppression. […] all herpesviruses share biologic traits. These include expression of a large number of viral enzymes, assembly of the nucleocapsid in the cell nucleus, cytopathic effects on the cell during productive infection, and ability to establish latent infections in an infected host.”
“Vaccine development poses great challenges in the case of herpesviruses because recovery from natural disease is not associated with elimination of virus and does not always protect against another episode of disease.
Live-attenuated, killed, and recombinant subunit herpesvirus vaccines have all been studied. Whole-virus vaccines have the advantage of exposing the immune system to all viral antigens. Live-attenuated vaccines have tended to produce longer-lasting immunity than killed preparations. However, live-attenuated herpesvirus vaccines may be capable of establishing latent infections. The risks are not clear and there is concern that vaccine recipients who subsequently become immunosuppressed may develop disease caused by reactivated virus. Two avirulent HSV strains have been shown to generate lethal recombinants in mice.127 Thus, recombination between an attenuated vaccine strain and a superinfecting wild-type strain could occur. Because several herpesviruses have been associated with malignancies in humans, the long-term safety of any live-attenuated vaccine needs careful study.”
“In the most recent data from NHANES, the prevalence of HSV-1 appears to have fallen slightly from 62% in the years 1988-1994 to 57.7% in the years 1999-2004 in the general population.30 In Western Europe, the prevalence of HSV-1 infection in young adults remains 10-20% higher than that in the United States.31 In STD clinics in the United States, about 60% of attendees have HSV-1 antibodies. In Asia and Africa, HSV-1 infection remains almost universal […] The cumulative lifetime incidence of HSV-2 reaches 25% in white women, 20% in white men, 80% in African American women and 60% in African American men […] Transmission of HSV between sexual partners has been addressed most often in prospective studies of serologically discordant couples, i.e., in couples in whom one partner has and the other does not have HSV-2. Longitudinal studies of such couples have shown that the transmission rate varies from 3% to 12% per year. […] Unlike other STDs, persons usually acquire genital HSV-1 and genital HSV-2 in the context of a steady rather than casual relationship.91 Women have higher rates of acquisition than men; in one study the attack rate among seronegative women approached 30% per year.88 […] Subclinical or asymptomatic viral shedding is an important aspect of the clinical and epidemiologic understanding of genital herpes, as most episodes of sexual and vertical transmission appear to occur during such shedding. […] the risk of HSV transmission is likely similar regardless of the presence of lesions, supporting the epidemiologic observation that most HSV is acquired from asymptomatic partners. […] Subclinical HSV reactivation is highest in the first year after acquisition of infection. During this time period, HSV can be detected from genital sites by PCR on a mean of 25-30% of days […]. This is about 1.5 times higher than patients sampled later in their disease course.”
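The 3-12% per-year transmission rate in discordant couples compounds over the duration of a relationship; a small sketch of the cumulative risk, assuming a constant and independent per-year rate (a simplification of mine, and the five-year horizon is likewise illustrative, not from the book):

```python
def cumulative_risk(annual_rate: float, years: int) -> float:
    """Probability of at least one transmission event in `years` years,
    assuming a constant, independent per-year transmission rate."""
    return 1 - (1 - annual_rate) ** years

# Quoted range: 3-12% per year; over five years this compounds to:
print(round(cumulative_risk(0.03, 5), 3))  # 0.141
print(round(cumulative_risk(0.12, 5), 3))  # 0.472
```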
“The major morbidity of recurrent genital herpes is its frequent reactivation rate. Most likely, all HSV-2 seropositive persons reactivate HSV-2 in the genital region. Moreover, because of the extensive area innervated by the sacral nerve root ganglia, reactivation of HSV-2 is widespread over a large anatomic area.
A prospective study of 457 patients with documented first-episode genital herpes infection has shown that 90% of patients with genital HSV-2 developed recurrences in the first 12 months of infection.93 The median recurrence rate was 0.33 recurrences/month. Most patients experienced multiple clinical reactivations. After primary HSV-2 infection, 38% of patients had at least 6 recurrences and 20% had more than 10 recurrences in the first year of infection. Men had slightly more frequent recurrences than women, median 5 per year compared with 4 recurrences per year [it’s important to note that the recurrence rate is substantial even in patients on suppressive therapy: “About 25% of persons on suppressive therapy will develop a breakthrough recurrence each 3-month period”] […] Recently, long-term cohort studies indicate that the frequency of symptomatic recurrences gradually decreases over time. In the initial years of infection, reported recurrence rate decreases by a median of 1 recurrence per year. […] subclinical shedding episodes account for one-third to one-half of the total episodes of HSV reactivation as measured by viral isolation and for 50-75% of reactivations as measured by PCR. […] Rather than being a predominantly silent infection with occasional clinical outbreaks with marked viral shedding, HSV is a dynamic infection, with very frequent, mostly subclinical reactivation, and active effort on the part of the immune system of the host is required to control mucosal viral replication.
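The bracketed note above about breakthrough recurrences can be annualized; assuming the four 3-month periods are independent (my simplification, not the book’s claim), the quoted 25% per quarter implies that most patients on suppressive therapy still experience at least one recurrence per year:

```python
def annual_breakthrough_prob(quarterly_prob: float) -> float:
    """Probability of at least one breakthrough recurrence per year,
    assuming the four 3-month periods are independent (a simplification)."""
    return 1 - (1 - quarterly_prob) ** 4

# "About 25% ... each 3-month period" annualizes to roughly two-thirds:
print(round(annual_breakthrough_prob(0.25), 2))  # 0.68
```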
[…] Immunocompromised patients have frequent and prolonged mucocutaneous HSV infections.226, 227, 228 Over 70% of renal and bone marrow transplant recipients who have serologic evidence of HSV infection reactivate HSV infection clinically within the first month after transplantation […] Recurrent genital herpes in immunosuppressed patients often results in the development of large numbers of vesicles which coalesce into extensive deep, often necrotic, ulcerative lesions.228 […] about 70% of HIV-infected persons in the developed world and 95% in the developing world have HSV-2 antibody. […] The epidemiologic interactions between HIV and HSV-2 have led to calculation of potential population-level impact of these intersecting epidemics. […] The population attributable risk will depend on the prevalence of HSV-2 in the population at risk; at 50% HSV-2 prevalence, common among MSM [men who have sex with men, US], or African Americans in the United States, or general population in sub-Saharan Africa, 35% of HIV infections will be attributable to HSV-2. […] the risk of transmitting HSV [from the mother] to the neonate is 30-50% in women with newly acquired HSV [during the last part of the pregnancy] versus <1% in women with established infection.” [This is relevant not only because herpes sucks, but also because it sucks even more when a newborn child gets it].
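The 35%-at-50%-prevalence figure can be reproduced with Levin’s formula for the population attributable risk fraction; note that the relative-risk value below is an assumption of mine, chosen to match the quoted number, not a figure given in the book:

```python
def attributable_fraction(prevalence: float, relative_risk: float) -> float:
    """Levin's formula: fraction of cases in the population attributable
    to an exposure, given exposure prevalence and relative risk."""
    excess = prevalence * (relative_risk - 1)
    return excess / (1 + excess)

# At 50% HSV-2 prevalence, a relative risk of about 2.1 (assumed here)
# yields the ~35% attributable fraction quoted in the text:
print(round(attributable_fraction(0.50, 2.08), 2))  # 0.35
```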
“More than 50% of individuals in most populations throughout the world demonstrate serological evidence of prior CMV infection.6 The coevolution with and adaptation to its human host over millions of years may account for the observation that in most cases, CMV infection causes few if any symptoms.5 However, in immunocompromised individuals, primary infection or reactivation of latent virus can be life-threatening. As well, congenital infections are common and can result in serious lifelong sequelae. […] Although CMV does not typically come to medical attention as a result of genital tract lesions or disease, it can be transmitted sexually and has important consequences for the sexually active, child-bearing population. […] As with many viruses that cause chronic infection, CMV seems to have coevolved with humans to a balanced state in which the virus persists but generally causes little clinical illness. The host’s innate and adaptive immune responses are usually successful at limiting CMV infection as is evident by the clear association of immune system dysfunction with CMV disease. In the absence of prophylactic antiviral treatment, CMV often reactivates in seropositive individuals who undergo hematopoietic stem cell transplantation (HSCT).41 Immunosuppression resulting from drugs used to treat cancer and autoimmune disorders, and from impaired T-cell function that occurs with advanced AIDS, is also associated with reactivation of CMV. 
[…] The development of primary CMV infection has been noted in up to 79% of liver transplants and 58% of kidney or heart transplants in which the donor is seropositive and the recipient is seronegative.134,135 In the setting of HSCT, several studies have documented that CMV seropositivity of the recipient results in significantly increased overall posttransplant mortality compared to CMV seronegative recipients with a seronegative donor.136 When the recipient is CMV seronegative, overall mortality is increased when the donor is seropositive compared to the situation where the donor is seronegative.137 […] the transplant recipient is at particularly high risk of CMV reactivation during periods of potent immunosuppression that accompany graft rejection or graft-versus-host disease.”
I read the first nine chapters of this very long book a while back, and I decided to have another go at it. I have now read chapters 10-18, the first seven of which deal with ‘Profiles of Vulnerable Populations’ (including chapters about: Gender and Sexually Transmitted Diseases (10), Adolescents and STDs Including HIV Infection (11), Female Sex Workers and Their Clients in the Epidemiology and Control of Sexually Transmitted Diseases (12), Homosexual and Bisexual Behavior in Men in Relation to STDs and HIV Infection (13), Lesbian Sexual Behavior in Relation to STDs and HIV Infection (14) (some surprising stuff in that chapter, but I won’t cover that here), HIV and Other Sexually Transmitted Infections in Injection Drug Users and Crack Cocaine Smokers (15), and STDs, HIV/AIDS, and Migrant Populations (16)), and the last two of which deal with ‘Host Immunity and Molecular Pathogenesis and STD’ (chapters about: ‘Genitourinary Immune Defense’ (17) and ‘Normal Genital Flora’ (18), as well as ‘Pathogenesis of Sexually Transmitted Viral and Bacterial Infections’ (19) – I have only read the first two chapters in that section so far, and so I won’t cover the last chapter here. I also won’t cover the content of the first of these chapters, but for different reasons). The book has 108 chapters and more than 2000 pages, so although I’ve started reading the book again I’m sure I won’t finish it this time either. My interest in the things covered in this book is purely academic in the first place.
Some observations and comments below…
“A major problem when assessing the risk of men and women of contracting an STI [sexually transmitted infection] is the differential reporting of sexual behavior between men and women. It is believed that women tend to underreport sexual activity, whereas men tend to over-report. This has been highlighted by studies assessing changes in reported age at first sexual intercourse between successive birth cohorts15 and by studies that compared the numbers of sex partners reported by men and by women.10,13,16, 17, 18 […] There is widespread agreement that women are more frequently and severely affected by STIs than men. […] In the studies in the general population that have assessed the prevalence of gonorrhea, chlamydial infection, and active syphilis, the prevalence was generally higher in women than in men […], with differences in prevalence being more marked in the younger age groups. […] HIV infection is also strikingly more prevalent in women than in men in most populations where the predominant mode of transmission is heterosexual intercourse and where the HIV epidemic is mature […] It is generally accepted that the male-to-female transmission of STI pathogens is more efficient than female-to-male transmission. […] The high vulnerability to STIs of young women compared to young men is [however] the result of an interplay between psychological, sociocultural, and biological factors.33”
“Complications of curable STIs, i.e., STIs caused by bacteria or protozoa, can be avoided if infected persons promptly seek care and are managed appropriately. However, prerequisites for timely treatment are that infected persons are aware that they are infected and that they seek care. A high proportion of men and of women infected with N. gonorrhoeae, C. trachomatis, or T. vaginalis, however, never experience symptoms. Women are asymptomatic more often than men. It has been estimated that 55% of episodes of gonorrhea in men and 86% of episodes in women remain asymptomatic; likewise, 89% of men and 94% of women with chlamydial infection remain asymptomatic.66 For chlamydial infection, it has been well documented that serious complications, including infertility due to tubal occlusion, can occur in the absence of a history of symptoms of pelvic inflammatory disease.65”
“Most population-based STD rates underestimate risk for sexually active adolescents because the rate is inappropriately expressed as cases of disease divided by the number of individuals in this age group. Yet only those who have had intercourse are truly at risk for STDs. For rates to reflect risk among those who are sexually experienced, appropriate denominators should include only the number of individuals in the demographic group who have had sexual intercourse. […] In general, when rates are corrected for those who are sexually active, the youngest adolescents have the highest STD rates of any age group.5”
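The denominator correction described in the quote is simple to express in code; the case counts and the 40% sexually-experienced figure below are hypothetical, purely for illustration:

```python
def std_rate_per_100k(cases: int, population: int,
                      fraction_sexually_experienced: float = 1.0) -> float:
    """STD rate per 100,000, restricting the denominator to those
    actually at risk (the sexually experienced)."""
    at_risk = population * fraction_sexually_experienced
    return 100_000 * cases / at_risk

# Hypothetical example: 500 cases among 100,000 adolescents, of whom
# 40% have ever had intercourse.
print(std_rate_per_100k(500, 100_000))       # naive rate: 500.0
print(std_rate_per_100k(500, 100_000, 0.4))  # corrected rate: 1250.0
```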
“Although risk of HPV acquisition increases with number of partners,67,74,75 prevalence of infection is substantial even with limited sexual exposure. Numerous clinic-based studies,76,77 supported by population-based data, indicate that HPV prevalence typically exceeds 10% among young women with only one or two partners.71”
“while 100 years ago young men in the United States spent approximately 7 years between [sexual] maturation and marriage, more recently the interval was 13 years, and increasing; for young women, the interval between menarche and marriage has increased from 8 years to 14. […] In 1970, only 5% of women in the United States had had premarital intercourse by age 15, whereas in 1988, 26% had engaged in intercourse by this age. However, in 1988, 37% of never married 15-17-year-olds had engaged in intercourse but in 2002, only 30% had. Comparable data from males demonstrated even greater declines — 50% of never married 15-17-year-olds reported having had intercourse in 1988, compared with only 31% in 2002.99”
“Infection with herpes simplex type 2 (HSV-2) is extremely common among FSWs [female sex workers], and because HSV-2 infection increases the likelihood of both HIV acquisition in HIV-uninfected individuals, and HIV transmission in HIV-infected individuals, HSV-2 infection plays a key role in HIV transmission dynamics.100 Studies of FSWs in Kenya,67 South Africa,101 Tanzania,36 and Mexico72 have found HSV-2 prevalences ranging from 70% to over 80%. In a prospective study of HIV seronegative FSWs in Nairobi, Kenya, 72.7% were HSV-2 seropositive at baseline.67 Over the course of over two years of observation […] HSV-2 seropositive FSWs were over six times more likely to acquire HIV infection than women who were HSV-2 seronegative.”
“Surveys in the UK133 and New Zealand134 found that approximately 7% of men reported ever paying for sex. A more recent telephone survey in Australia found that almost 16% of men reported having ever paid for sex, with 1.9% reporting that they had paid for sex in the past 12 months.135 Two national surveys in Britain found that the proportion of men who reported paying women for sex in the previous 5 years increased from 2.0% in 1990 to 4.2% in 2000.14 A recent review article summarizing the findings of various surveys in different global regions found that the median proportion of men who reported “exchanging gifts or money for sex” in the past 12 months was approximately 9-10%, whereas the proportion of men who reported engaging in “paid sex” or sex with a sex worker was 2-3%.136”
“There are currently around 175-200 million people documented as living outside their countries of birth.3 This number includes both voluntary migrants, people who have chosen to leave their country of origin, and forced migrants, including refugees, trafficked people, and internally displaced people.4 […] Each year about 700 million people travel internationally with an estimated 50 million originating in developed countries traveling to developing ones.98 […] Throughout history, infectious diseases of humans have followed population movements. The great drivers of population mobility including migration, economic changes, social change, war, and travel have been associated with disease acquisition and spread at individual and population levels. There have been particularly strong associations of these key modes of population mobility and mixing for sexually transmitted diseases (STDs), including HIV/AIDS. […] Epidemiologists elucidated early in the HIV/AIDS epidemic that there was substantial geographic variability in incidence, as well as different risk factors for disease spread. As researchers better understood the characteristics of HIV transmission, its long incubation time, relatively low infectivity, and chronic disease course, it became clear that mobility of infected persons was a key determinant for further spread to new populations.6 […] mobile populations are more likely to exhibit high-risk behaviors”
“Studies conducted over the past decade have relied on molecular techniques to identify previously noncultivable organisms in the vagina of women with “normal” and “abnormal” flora. […] These studies have confirmed that the microflora of some women is predominated by species belonging to the genus Lactobacillus, while women having BV [bacterial vaginosis] have a broad range of aerobic and anaerobic microorganisms. It has become increasingly clear that even with these more advanced tools to characterize the microbial ecology of the vagina the full range of microorganisms present has yet to be fully described. […] the frequency and concentration of many facultative organisms depends upon whether the woman has BV or Lactobacillus-predominant microflora.36 However, even if “normal” vaginal microflora is restricted to those women having a Lactobacillus-dominant flora as defined by Gram stain, 46% of women are colonized by G. vaginalis, 78% are colonized by Ureaplasma urealyticum, and 31% are colonized by Candida albicans.36 […] Nearly all women are vaginally colonized by obligately anaerobic gram-negative rods and cocci,36 and several species of anaerobic bacteria, which are not yet named, are also present. While some species of anaerobes are present at higher frequencies or concentrations among women with BV, it is clear that the microbial flora is complex and cannot be defined simply by the presence or absence of lactobacilli, Gardnerella, mycoplasmas, and anaerobes. This observation has been confirmed with molecular characterization of the microflora.26, 27, 28, 29, 30, 31, 32, 33, 34, 35”
Vaginal pH, which is in some sense an indicator of vaginal health, varies over the lifespan (I did not know this..): in premenarchal girls vaginal pH is around 7, whereas it drops to 4.0-4.5 in healthy women of reproductive age. It increases again after menopause, but postmenopausal women receiving hormone replacement therapy have lower average vaginal pH and higher numbers of lactobacilli in their vaginal floras than do postmenopausal women not receiving such therapy, one of several findings indicating that vaginal pH is under hormonal control (estrogen is important). Lactobacilli play an important role here because they produce lactic acid, which lowers pH, and women with reduced numbers of lactobacilli in their vaginal floras have higher vaginal pH. Stuff like sexual intercourse, menses, and breastfeeding all affect vaginal pH and -microflora, as does antibiotic usage, and such things may play a role in disease susceptibility. Aside from lowering pH, some Lactobacillus species also play other helpful roles which are likely to be important in terms of disease susceptibility, such as producing hydrogen peroxide in their microenvironments, which is the kind of stuff a lot of (other) bacteria really don’t like to be around: “Several clinical studies conducted in populations of pregnant and nonpregnant women in the United States and Japan have shown that the prevalence of BV is low (4%) among women colonized with H2O2-producing strains of lactobacilli. By comparison, approximately one third of women who are vaginally colonized by Lactobacillus that do not produce H2O2 have BV.45, 46, 47“.
My interest in the things covered in this book is as mentioned purely academic, but I’m well aware that some of the stuff may not be as ‘irrelevant’ to other people reading along here as it is to me. One particularly relevant observation I came across which I thought I should include here is this:
“The lack of reliable phenotypic methods for identification of lactobacilli has led to a broad misunderstanding of the species of lactobacilli present in the vagina, and the common misperception that dairy and food derived lactobacilli are similar to those found in the vagina. […] Acidophilus in various forms have been used to treat yeast vaginitis.144 Some investigators have gone so far as to suggest that ingestion of yogurt containing acidophilus prevents recurrent Candida vaginitis.145 Nevertheless, clinical studies of women with acute recurrent vulvovaginitis have demonstrated that women who have recurrent yeast vaginitis have the same frequency and concentration of Lactobacillus as women without recurrent infections.146 […] many women who seek medical care for chronic vaginal symptoms report using Lactobacillus-containing products orally or vaginally to restore the vaginal microflora in the mistaken belief that this will prevent recurrent vaginitis.147 Well-controlled trials have failed to document any decrease in vaginal candidiasis whether orally or vaginally applied preparations of lactobacilli are used by women.148 Microbial interactions in the vagina probably are much more complex than have been appreciated in the past.”
As illustrated above, there seem to be some things ‘we’ know which ‘people’ (including some doctors..) don’t know. But there are also some really quite relevant things ‘we’ don’t know a lot about yet. One example would be whether/how hygiene products mediate the impact of menses on the vaginal flora: “It is unknown whether the use of tampons, which might absorb red blood cells during menses, may minimize the impact of menses on colonization by lactobacilli. However, some observational data suggests that women who routinely use tampons for catamenial protection are more likely to maintain colonization by lactobacilli compared to women who use pads for catamenial protection”. Just to remind you, colonization by lactobacilli is desirable. On a related and more general note: “Many young women use vaginal products including lubricants, contraceptives, antifungals, and douches. Each of these products can alter the vaginal ecosystem by changing vaginal pH, altering the vaginal fluid by direct dilution, or by altering the capacity of organisms to bind to the vaginal epithelium.” There are a lot of variables at play here, and my reading of the results indicates that it’s not always obvious what the best advice actually is. For example, a comparatively large (n=235) prospective study of the effect of N-9, a compound widely used in contraceptives, on the vaginal flora “demonstrated that N-9 did have a dose-dependent impact on the prevalence of anaerobic gram-negative rods, and was associated with a twofold increase in BV (OR 2.3, 95% CI 1.1-4.7).” Using such spermicides may on the one hand decrease the likelihood of getting pregnant and perhaps lower the risk of contracting a sexually transmitted disease during intercourse, but on the other hand their usage may also affect the vaginal flora in a way which makes users more vulnerable to sexually transmitted diseases, e.g. by promoting E. coli colonization of the vaginal flora.
On a more general note, “The impact of contraceptives on the vaginal ecosystem, including their impact on susceptibility to infection, has not been adequately investigated to date.” The book does cover various studies on different types of contraceptives, but most of the studies are small and probably underpowered, so I decided not to go into this stuff in more detail. An important point to take away here is however that there’s no doubt that the vaginal flora is important for disease susceptibility: “longitudinal studies [have] showed a consistent link between increased incidence of HIV, HSV-2 and HPV and altered vaginal microflora […] there is a strong interaction between the health of the vaginal ecosystem and susceptibility to viral STIs.” Unfortunately, “use of probiotic products for treatment of BV has met with limited success.”
I should note that although multiple variables and interactions are involved in ‘this part of the equation’, it is of course only part of the bigger picture. One way in which it’s only part of the bigger picture is that the vaginal flora plays other roles besides the one which relates to susceptibility to sexually transmitted disease – one example: “Studies have established that some organisms considered to be part of the normal vaginal microflora are associated with an increased risk of preterm and/or low birth weight delivery when they are present at high-density concentrations in the vaginal fluid”. (And once again the lactobacilli in particular may play a role: “high-density vaginal colonization by Lactobacillus species has been linked with a decreased risk of most adverse outcomes of pregnancy”). Another major way in which this stuff is only part of the equation is that human females have a lot of other ways to defend themselves as well besides relying on bacterial colonists. If you don’t like immunology there are some chapters in here which you’d be well-advised to skip.
I was very conflicted about blogging this book at all, but I figured that given I have blogged all other non-fiction books this year so far I probably ought to at least talk a little bit about this one as well. I wrote this on goodreads:
“The book contained a brief review of some mathematics used in a couple of previous courses I’ve taken, with some new details added to the mix. Having worked with this stuff before is probably a requirement to get anything much out of it, as it is highly technical.”
Here are some observations/comments from the conclusion, providing a brief outline:
“In this book we have studied discrete-time stochastic optimal control problems (OCPs) and dynamic games by means of the Euler equation (EE) approach. […] In Chap. 2 we studied the EE approach to nonstationary OCPs in discrete-time. OCPs are usually solved by dynamic programming and the Lagrange method. The latter techniques for solving OCPs are based on iteration methods or rely on guessing the form of the value or the policy functions […] In contrast, the EE approach does not require an iteration method nor knowledge about the form of the value function; on the contrary, the value function can be computed after the OCP is solved. Following the EE approach, we have to solve a second-order difference equation (possibly nonlinear and/or nonhomogeneous); there are, however, many standard methods to do this. Both the EE […] and the transversality condition (TC) […] are known in the literature. The EE […] is typically deduced from the Bellman equation whereas the necessity of the TC […] is obtained by using approximation or perturbation results. Our main results in Chap. 2 require milder assumptions […] In Theorem 2.1 we obtain the EE (2.14) and the TC (2.15), as necessary conditions for optimality, using Gâteaux differentials. […] Chapter 3 was devoted to an inverse optimal problem in stochastic control. […] Finally, in Chap. 4, some results from Chaps. 2 and 3 were applied to dynamic games. Sufficient conditions to identify MNE [Markov–Nash equilibria] and OLNE [Open-loop Nash equilibria], by following the EE approach, were given […] one of our main objectives was to identify DPGs [Dynamic potential games] by generalizing the procedure of Dechert and O’Donnell for the SLG”
“Some advantages and shortcomings of the EE approach. A first advantage of using the EE to solve discrete-time OCPs is that it is very natural and straightforward, because it is an obvious extension of results on the properties of maxima (or minima) of differentiable functions. Indeed, as shown in Sect. 2.2, using Gâteaux differentials, the EE and some transversality condition are straightforward consequences of the elementary calculus approach. From our present point of view, the main advantage of the EE approach is that it allows us to analyze certain inverse OCPs required to characterize the dynamic potential games we are interested in. It is not clear to us that these inverse OCPs can be analyzed by other methods (e.g., dynamic programming or the maximum principle). On the other hand, a possible disadvantage is that the Euler equation might require some “guessing” to obtain a sequence that solves it. This feature, however, is common to other solution techniques such as dynamic programming.”
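To make the EE approach a little more concrete, here’s a minimal sketch of my own (not taken from the book, and deterministic rather than stochastic): the classic Brock–Mirman growth model, maximizing ∑ βᵗ ln(cₜ) subject to kₜ₊₁ = A·kₜᵅ − cₜ. Here the Euler equation is a nonlinear second-order difference equation in the capital stock, and the known closed-form policy cₜ = (1 − αβ)·A·kₜᵅ can be checked against it directly, which illustrates the point from the quote above: once you have (or guess) a candidate sequence, verifying that it solves the EE requires no iteration and no knowledge of the value function.

```python
# Brock-Mirman growth model: my own illustration of the Euler-equation
# approach, not an example from the book.
# Maximize sum_t beta^t * ln(c_t)  subject to  k_{t+1} = A * k_t**alpha - c_t.
# Euler equation:  1/c_t = beta * alpha * A * k_{t+1}**(alpha - 1) / c_{t+1}.
# Known closed-form solution: the policy c_t = (1 - alpha*beta) * A * k_t**alpha.

alpha, beta, A = 0.3, 0.95, 1.2  # arbitrary illustrative parameter values

def policy(k):
    """Candidate optimal consumption policy (the known closed-form solution)."""
    return (1 - alpha * beta) * A * k**alpha

def euler_residual(k):
    """Residual of the Euler equation under the candidate policy.

    Zero (up to floating-point error) for every k > 0 iff the policy
    satisfies the Euler equation along the induced capital path.
    """
    c = policy(k)
    k_next = A * k**alpha - c          # capital next period under the policy
    c_next = policy(k_next)
    return 1.0 / c - beta * alpha * A * k_next**(alpha - 1) / c_next

# The residual vanishes along the whole trajectory starting from k = 0.5:
k = 0.5
for _ in range(10):
    assert abs(euler_residual(k)) < 1e-9
    k = A * k**alpha - policy(k)
```

Note that the code only *verifies* a candidate solution; actually producing one still involves the “guessing” the authors mention, or solving the second-order difference equation by standard methods.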
If none of the above makes much sense to you, I wouldn’t worry too much about it. Stuff like this, this, and this was covered in previous coursework of mine so I was familiar with some of the stuff covered in this book; stuff like this is part of what many economists learn during their education. I figured it’d be interesting to see a more ‘pure-math’ coverage of these things. It turned out, however, that many of the applications in the book are economics-related, so in a way the coverage was ‘less pure’ than I’d thought before I started out.
A couple of links I looked up along the way are these: Gâteaux derivative, Riccati equation, Borel set. I haven’t read this, but a brief google for some of the relevant terms above made that one pop up; it looks as if it may be a good resource if you’re curious to learn more about what this kind of stuff is about.