“Since meta-analysis is a relatively new field, many people, including those who actually use meta-analysis in their work, have not had the opportunity to learn about it systematically. We hope that this volume will provide a framework that allows them to understand the logic of meta-analysis, as well as how to apply and interpret meta-analytic procedures properly.
This book is aimed at researchers, clinicians, and statisticians. Our approach is primarily conceptual. The reader will be able to skip the formulas and still understand, for example, the differences between fixed-effect and random-effects analysis, and the mechanisms used to assess the dispersion in effects from study to study. However, for those with a statistical orientation, we include all the relevant formulas, along with worked examples. [...] This volume is intended for readers from various substantive fields, including medicine, epidemiology, social science, business, ecology, and others. While we have included examples from many of these disciplines, the more important message is that meta-analytic methods that may have developed in any one of these fields have application to all of them.”
I’ve been reading this book and I like it – I’ve read about the topic before, but I’d been missing a textbook treatment, and this one is quite good (I’ve read roughly half of it so far). Below I have added some observations from the first thirteen chapters of the book:
“Meta-analysis refers to the statistical synthesis of results from a series of studies. While the statistical procedures used in a meta-analysis can be applied to any set of data, the synthesis will be meaningful only if the studies have been collected systematically. This could be in the context of a systematic review, the process of systematically locating, appraising, and then synthesizing data from a large number of sources. Or, it could be in the context of synthesizing data from a select group of studies, such as those conducted by a pharmaceutical company to assess the efficacy of a new drug. If a treatment effect (or effect size) is consistent across the series of studies, these procedures enable us to report that the effect is robust across the kinds of populations sampled, and also to estimate the magnitude of the effect more precisely than we could with any of the studies alone. If the treatment effect varies across the series of studies, these procedures enable us to report on the range of effects, and may enable us to identify factors associated with the magnitude of the effect size.”
“For systematic reviews, a clear set of rules is used to search for studies, and then to determine which studies will be included in or excluded from the analysis. Since there is an element of subjectivity in setting these criteria, as well as in the conclusions drawn from the meta-analysis, we cannot say that the systematic review is entirely objective. However, because all of the decisions are specified clearly, the mechanisms are transparent. A key element in most systematic reviews is the statistical synthesis of the data, or the meta-analysis. Unlike the narrative review, where reviewers implicitly assign some level of importance to each study, in meta-analysis the weights assigned to each study are based on mathematical criteria that are specified in advance. While the reviewers and readers may still differ on the substantive meaning of the results (as they might for a primary study), the statistical analysis provides a transparent, objective, and replicable framework for this discussion. [...] If the entire review is performed properly, so that the search strategy matches the research question, and yields a reasonably complete and unbiased collection of the relevant studies, then (providing that the included studies are themselves valid) the meta-analysis will also be addressing the intended question. On the other hand, if the search strategy is flawed in concept or execution, or if the studies are providing biased results, then problems exist in the review that the meta-analysis cannot correct.”
“Meta-analyses are conducted for a variety of reasons [...] The purpose of the meta-analysis, or more generally, the purpose of any research synthesis has implications for when it should be performed, what model should be used to analyze the data, what sensitivity analyses should be undertaken, and how the results should be interpreted. Losing sight of the fact that meta-analysis is a tool with multiple applications causes confusion and leads to pointless discussions about what is the right way to perform a research synthesis, when there is no single right way. It all depends on the purpose of the synthesis, and the data that are available.”
“The effect size, a value which reflects the magnitude of the treatment effect or (more generally) the strength of a relationship between two variables, is the unit of currency in a meta-analysis. We compute the effect size for each study, and then work with the effect sizes to assess the consistency of the effect across studies and to compute a summary effect. [...] The summary effect is nothing more than the weighted mean of the individual effects. However, the mechanism used to assign the weights (and therefore the meaning of the summary effect) depends on our assumptions about the distribution of effect sizes from which the studies were sampled. Under the fixed-effect model, we assume that all studies in the analysis share the same true effect size, and the summary effect is our estimate of this common effect size. Under the random-effects model, we assume that the true effect size varies from study to study, and the summary effect is our estimate of the mean of the distribution of effect sizes. [...] A key theme in this volume is the importance of assessing the dispersion of effect sizes from study to study, and then taking this into account when interpreting the data. If the effect size is consistent, then we will usually focus on the summary effect, and note that this effect is robust across the domain of studies included in the analysis. If the effect size varies modestly, then we might still report the summary effect but note that the true effect in any given study could be somewhat lower or higher than this value. If the effect varies substantially from one study to the next, our attention will shift from the summary effect to the dispersion itself.”
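The “weighted mean of the individual effects” idea is simple enough to sketch in a few lines. This is my own toy illustration (made-up numbers, not from the book) of a fixed-effect summary, where each study is weighted by the inverse of its variance:

```python
# Fixed-effect summary: inverse-variance weighted mean of study effects.
# Illustrative numbers only, not real data.
effects = [0.3, 0.5, 0.4]        # observed effect size in each study
variances = [0.04, 0.01, 0.02]   # within-study variance of each estimate

weights = [1 / v for v in variances]
summary = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
summary_var = 1 / sum(weights)   # variance of the summary effect
```

Note that the summary variance is smaller than any single study’s variance, which is the book’s point about estimating the effect “more precisely than we could with any of the studies alone”.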
“During the time period beginning in 1959 and ending in 1988 (a span of nearly 30 years) there were a total of 33 randomized trials performed to assess the ability of streptokinase to prevent death following a heart attack. [...] The trials varied substantially in size. [...] Of the 33 studies, six were statistically significant while the other 27 were not, leading to the perception that the studies yielded conflicting results. [...] In 1992 Lau et al. published a meta-analysis that synthesized the results from the 33 studies. [...] [They found that] the treatment reduces the risk of death by some 21%. And, this effect was reasonably consistent across all studies in the analysis. [...] The narrative review has no mechanism for synthesizing the p-values from the different studies, and must deal with them as discrete pieces of data. In this example six of the studies were statistically significant while the other 27 were not, which led some to conclude that there was evidence against an effect, or that the results were inconsistent [...] By contrast, the meta-analysis allows us to combine the effects and evaluate the statistical significance of the summary effect. The p-value for the summary effect [was] p=0.0000008. [...] While one might assume that 27 studies failed to reach statistical significance because they reported small effects, it is clear [...] that this is not the case. In fact, the treatment effect in many of these studies was actually larger than the treatment effect in the six studies that were statistically significant. Rather, the reason that 82% of the studies were not statistically significant is that these studies had small sample sizes and low statistical power.”
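For what it’s worth, the p-value for a summary effect comes from an ordinary z-test: divide the summary estimate by its standard error. A minimal sketch with made-up numbers (these are not the streptokinase data):

```python
import math

# Two-sided p-value for a summary effect M with standard error SE.
# Numbers are made up for illustration.
M, SE = -0.24, 0.05   # e.g. a summary log risk ratio and its standard error
z = M / SE
p = math.erfc(abs(z) / math.sqrt(2))   # equals 2 * (1 - Phi(|z|))
```

The key point from the book stands out here: the p-value depends on M and SE jointly, so a tiny p can come from a modest effect estimated very precisely across many pooled studies.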
“the [narrative] review will often focus on the question of whether or not the body of evidence allows us to reject the null hypothesis. There is no good mechanism for discussing the magnitude of the effect. By contrast, the meta-analytic approaches discussed in this volume allow us to compute an estimate of the effect size for each study, and these effect sizes fall at the core of the analysis. This is important because the effect size is what we care about. If a clinician or patient needs to make a decision about whether or not to employ a treatment, they want to know if the treatment reduces the risk of death by 5% or 10% or 20%, and this is the information carried by the effect size. [...] The p-value can tell us only that the effect is not zero, and to report simply that the effect is not zero is to miss the point. [...] The narrative review has no good mechanism for assessing the consistency of effects. The narrative review starts with p-values, and because the p-value is driven by the size of a study as well as the effect in that study, the fact that one study reported a p-value of 0.001 and another reported a p-value of 0.50 does not mean that the effect was larger in the former. The p-value of 0.001 could reflect a large effect size but it could also reflect a moderate or small effect in a large study [...] The p-value of 0.50 could reflect a small (or nil) effect size but could also reflect a large effect in a small study [...] This point is often missed in narrative reviews. Often, researchers interpret a nonsignificant result to mean that there is no effect. If some studies are statistically significant while others are not, the reviewers see the results as conflicting. This problem runs through many fields of research. [...] By contrast, meta-analysis completely changes the landscape. First, we work with effect sizes (not p-values) to determine whether or not the effect size is consistent across studies. 
Additionally, we apply methods based on statistical theory to allow that some (or all) of the observed dispersion is due to random sampling variation rather than differences in the true effect sizes. Then, we apply formulas to partition the variance into random error versus real variance, to quantify the true differences among studies, and to consider the implications of this variance.”
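The variance-partitioning step described above is usually done with Cochran’s Q statistic and the DerSimonian-Laird estimator of the between-study variance τ² (a method the book covers). A sketch with invented numbers: Q measures the weighted dispersion of the observed effects around the fixed-effect mean, its expected value under homogeneity is k − 1, and the excess is attributed to true between-study variance.

```python
# Partitioning observed dispersion into sampling error vs. true
# between-study variance (DerSimonian-Laird). Illustrative numbers only.
effects = [0.10, 0.35, 0.60, 0.45]
variances = [0.03, 0.02, 0.05, 0.01]

w = [1 / v for v in variances]
mean = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
Q = sum(wi * (yi - mean) ** 2 for wi, yi in zip(w, effects))
df = len(effects) - 1                      # expected value of Q if tau^2 = 0
C = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (Q - df) / C)              # truncated at zero
```

If Q is no larger than its degrees of freedom, the observed spread is consistent with pure sampling error and τ² is estimated as zero.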
“Consider [...] the case where some studies report a difference in means, which is used to compute a standardized mean difference. Others report a difference in proportions which is used to compute an odds ratio. And others report a correlation. All the studies address the same broad question, and we want to include them in one meta-analysis. [...] we are now dealing with different indices, and we need to convert them to a common index before we can proceed. The question of whether or not it is appropriate to combine effect sizes from studies that used different metrics must be considered on a case by case basis. The key issue is that it only makes sense to compute a summary effect from studies that we judge to be comparable in relevant ways. If we would be comfortable combining these studies if they had used the same metric, then the fact that they used different metrics should not be an impediment. [...] When some studies use means, others use binary data, and others use correlational data, we can apply formulas to convert among effect sizes. [...] When we convert between different measures we make certain assumptions about the nature of the underlying traits or effects. Even if these assumptions do not hold exactly, the decision to use these conversions is often better than the alternative, which is to simply omit the studies that happened to use an alternate metric. This would involve loss of information, and possibly the systematic loss of information, resulting in a biased sample of studies. A sensitivity analysis to compare the meta-analysis results with and without the converted studies would be important. [...] Studies that used different measures may [however] differ from each other in substantive ways, and we need to consider this possibility when deciding if it makes sense to include the various studies in the same analysis.”
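The conversions mentioned here follow standard formulas (the book devotes a chapter to them): a log odds ratio converts to a standardized mean difference d via the factor √3/π, and d converts to a correlation r. A sketch with a toy odds ratio, keeping in mind the book’s caveat that these formulas assume particular underlying logistic/normal models:

```python
import math

# Converting among effect-size indices: log odds ratio -> standardized
# mean difference d -> correlation r. Toy input, not real data.
log_or = math.log(2.5)                 # illustrative log odds ratio
d = log_or * math.sqrt(3) / math.pi    # ln(OR) to d
a = 4                                  # correction factor for equal-sized groups
r = d / math.sqrt(d ** 2 + a)          # d to r
```

With unequal group sizes the correction factor a differs from 4, which is exactly the kind of assumption the book says to keep in mind when deciding whether conversion beats dropping the studies.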
“The precision with which we estimate an effect size can be expressed as a standard error or confidence interval [...] or as a variance [...] The precision is driven primarily by the sample size, with larger studies yielding more precise estimates of the effect size. [...] Other factors affecting precision include the study design, with matched groups yielding more precise estimates (as compared with independent groups) and clustered groups yielding less precise estimates. In addition to these general factors, there are unique factors that affect the precision for each effect size index. [...] Studies that yield more precise estimates of the effect size carry more information and are assigned more weight in the meta-analysis.”
“Under the fixed-effect model we assume that all studies in the meta-analysis share a common (true) effect size. [...] However, in many systematic reviews this assumption is implausible. When we decide to incorporate a group of studies in a meta-analysis, we assume that the studies have enough in common that it makes sense to synthesize the information, but there is generally no reason to assume that they are identical in the sense that the true effect size is exactly the same in all the studies. [...] Because studies will differ in the mixes of participants and in the implementations of interventions, among other reasons, there may be different effect sizes underlying different studies. [...] One way to address this variation across studies is to perform a random-effects meta-analysis. In a random-effects meta-analysis we usually assume that the true effects are normally distributed. [...] Since our goal is to estimate the mean of the distribution, we need to take account of two sources of variance. First, there is within-study error in estimating the effect in each study. Second (even if we knew the true mean for each of our studies), there is variation in the true effects across studies. Study weights are assigned with the goal of minimizing both sources of variance.”
“Under the fixed-effect model we assume that the true effect size for all studies is identical, and the only reason the effect size varies between studies is sampling error (error in estimating the effect size). Therefore, when assigning weights to the different studies we can largely ignore the information in the smaller studies since we have better information about the same effect size in the larger studies. By contrast, under the random-effects model the goal is not to estimate one true effect, but to estimate the mean of a distribution of effects. Since each study provides information about a different effect size, we want to be sure that all these effect sizes are represented in the summary estimate. This means that we cannot discount a small study by giving it a very small weight (the way we would in a fixed-effect analysis). The estimate provided by that study may be imprecise, but it is information about an effect that no other study has estimated. By the same logic we cannot give too much weight to a very large study (the way we might in a fixed-effect analysis). [...] Under the fixed-effect model there is a wide range of weights [...] whereas under the random-effects model the weights fall in a relatively narrow range. [...] the relative weights assigned under random effects will be more balanced than those assigned under fixed effects. As we move from fixed effect to random effects, extreme studies will lose influence if they are large, and will gain influence if they are small. [...] Under the fixed-effect model the only source of uncertainty is the within-study (sampling or estimation) error. Under the random-effects model there is this same source of uncertainty plus an additional source (between-studies variance). It follows that the variance, standard error, and confidence interval for the summary effect will always be larger (or wider) under the random-effects model than under the fixed-effect model [...] 
Under the fixed-effect model the null hypothesis being tested is that there is zero effect in every study. Under the random-effects model the null hypothesis being tested is that the mean effect is zero. Although some may treat these hypotheses as interchangeable, they are in fact different”
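The claim that random-effects weights fall in a narrower range is easy to see numerically: every study’s weight becomes 1/(vᵢ + τ²), and adding the same τ² to each within-study variance compresses the relative differences. My own toy illustration, with made-up variances and an assumed τ²:

```python
# Fixed-effect weights 1/v_i versus random-effects weights 1/(v_i + tau^2).
# Adding the same tau^2 to every within-study variance compresses the
# relative weights: large studies lose influence, small studies gain it.
# Illustrative numbers only.
variances = [0.005, 0.02, 0.08]   # a large, a medium, and a small study
tau2 = 0.03                       # assumed between-study variance

def rel_weights(vs):
    w = [1 / v for v in vs]
    return [wi / sum(w) for wi in w]

fixed_rel = rel_weights(variances)
random_rel = rel_weights([v + tau2 for v in variances])
```

As τ² grows relative to the within-study variances, the random-effects weights approach a simple unweighted average of the studies.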
“It makes sense to use the fixed-effect model if two conditions are met. First, we believe that all the studies included in the analysis are functionally identical. Second, our goal is to compute the common effect size for the identified population, and not to generalize to other populations. [...] this situation is relatively rare. [...] By contrast, when the researcher is accumulating data from a series of studies that had been performed by researchers operating independently, it would be unlikely that all the studies were functionally equivalent. Typically, the subjects or interventions in these studies would have differed in ways that would have impacted on the results, and therefore we should not assume a common effect size. Therefore, in these cases the random-effects model is more easily justified than the fixed-effect model. [...] There is one caveat to the above. If the number of studies is very small, then the estimate of the between-studies variance [...] will have poor precision. While the random-effects model is still the appropriate model, we lack the information needed to apply it correctly. In this case the reviewer may choose among several options, each of them problematic [and one of which is to apply a fixed effects framework].”
i. “If people spent as much time studying as they spent hating, I’d be writing this from a goddamn moon-base.” (Zach Weiner)
ii. “Experience comprises illusions lost, rather than wisdom gained.” (Joseph Roux)
iii. “The happiness which is lacking makes one think even the happiness one has unbearable.” (-ll-)
iv. “Men are more apt to be mistaken in their generalizations than in their particular observations.” (Niccolo Machiavelli)
v. “A good reputation is more valuable than money.” (Publilius Syrus)
vi. “Many receive advice, few profit by it.” (-ll-)
vii. “Anyone can hold the helm when the sea is calm.” (-ll-)
viii. “To forget the wrongs you receive, is to remedy them.” (-ll-)
ix. “It is a bad plan that admits of no modification.” (-ll-)
x. “No one knows what he can do till he tries.” (-ll-)
xi. “Everything is worth what its purchaser will pay for it.” (-ll-)
xii. “I have often regretted my speech, never my silence.” (-ll-)
xiii. “We give to necessity the praise of virtue.” (Quintilian)
xiv. “Those who wish to appear wise among fools, among the wise seem foolish.” (-ll-)
xv. “Shared danger is the strongest of bonds; it will keep men united in spite of mutual dislike and suspicion.” (Titus Livius Patavinus)
xvi. “Favor and honor sometimes fall more fitly on those who do not desire them.” (-ll-)
xvii. “Men are only too clever at shifting blame from their own shoulders to those of others.” (-ll-)
xviii. “Men are slower to recognise blessings than misfortunes.” (-ll-)
xix. “It is easier to criticize than to correct our past errors.” (-ll-)
xx. “There is an old saying which, from its truth, has become proverbial, that friendships should be immortal, enmities mortal.” (-ll-)
(A minor note: These days when I’m randomly browsing wikipedia, and not just looking up concepts or terms found in the books I read, I’m mostly browsing the featured content on wikipedia. There’s a lot of featured stuff, and on average such articles are more interesting than random articles. As a result of this approach, all articles covered in this post are featured articles. A related consequence of this shift is that I may cover fewer articles in future wikipedia posts than I have in the past; this post only contains five articles, which I believe is slightly fewer than usual for these posts – a big reason for this being that it sometimes takes a lot of time to read a featured article.)
i. Woolly mammoth.
“The woolly mammoth (Mammuthus primigenius) was a species of mammoth, the common name for the extinct elephant genus Mammuthus. The woolly mammoth was one of the last in a line of mammoth species, beginning with Mammuthus subplanifrons in the early Pliocene. M. primigenius diverged from the steppe mammoth, M. trogontherii, about 200,000 years ago in eastern Asia. Its closest extant relative is the Asian elephant. [...] The earliest known proboscideans, the clade which contains elephants, existed about 55 million years ago around the Tethys Sea. [...] The family Elephantidae existed six million years ago in Africa and includes the modern elephants and the mammoths. Among many now extinct clades, the mastodon is only a distant relative of the mammoths, and part of the separate Mammutidae family, which diverged 25 million years before the mammoths evolved. [...] The woolly mammoth coexisted with early humans, who used its bones and tusks for making art, tools, and dwellings, and the species was also hunted for food. It disappeared from its mainland range at the end of the Pleistocene 10,000 years ago, most likely through a combination of climate change, consequent disappearance of its habitat, and hunting by humans, though the significance of these factors is disputed. Isolated populations survived on Wrangel Island until 4,000 years ago, and on St. Paul Island until 6,400 years ago.”
“The appearance and behaviour of this species are among the best studied of any prehistoric animal due to the discovery of frozen carcasses in Siberia and Alaska, as well as skeletons, teeth, stomach contents, dung, and depiction from life in prehistoric cave paintings. [...] Fully grown males reached shoulder heights between 2.7 and 3.4 m (9 and 11 ft) and weighed up to 6 tonnes (6.6 short tons). This is almost as large as extant male African elephants, which commonly reach 3–3.4 m (9.8–11.2 ft), and is less than the size of the earlier mammoth species M. meridionalis and M. trogontherii, and the contemporary M. columbi. [...] Woolly mammoths had several adaptations to the cold, most noticeably the layer of fur covering all parts of the body. Other adaptations to cold weather include ears that are far smaller than those of modern elephants [...] The small ears reduced heat loss and frostbite, and the tail was short for the same reason [...] They had a layer of fat up to 10 cm (3.9 in) thick under the skin, which helped to keep them warm. [...] The coat consisted of an outer layer of long, coarse “guard hair”, which was 30 cm (12 in) on the upper part of the body, up to 90 cm (35 in) in length on the flanks and underside, and 0.5 mm (0.020 in) in diameter, and a denser inner layer of shorter, slightly curly under-wool, up to 8 cm (3.1 in) long and 0.05 mm (0.0020 in) in diameter. The hairs on the upper leg were up to 38 cm (15 in) long, and those of the feet were 15 cm (5.9 in) long, reaching the toes. The hairs on the head were relatively short, but longer on the underside and the sides of the trunk. The tail was extended by coarse hairs up to 60 cm (24 in) long, which were thicker than the guard hairs. It is likely that the woolly mammoth moulted seasonally, and that the heaviest fur was shed during spring.”
“Woolly mammoths had very long tusks, which were more curved than those of modern elephants. The largest known male tusk is 4.2 m (14 ft) long and weighs 91 kg (201 lb), but 2.4–2.7 m (7.9–8.9 ft) and 45 kg (99 lb) was a more typical size. Female tusks averaged at 1.5–1.8 m (4.9–5.9 ft) and weighed 9 kg (20 lb). About a quarter of the length was inside the sockets. The tusks grew spirally in opposite directions from the base and continued in a curve until the tips pointed towards each other. In this way, most of the weight would have been close to the skull, and there would be less torque than with straight tusks. The tusks were usually asymmetrical and showed considerable variation, with some tusks curving down instead of outwards and some being shorter due to breakage.”
“Woolly mammoths needed a varied diet to support their growth, like modern elephants. An adult of six tonnes would need to eat 180 kg (397 lb) daily, and may have foraged as long as twenty hours every day. [...] Woolly mammoths continued growing past adulthood, like other elephants. Unfused limb bones show that males grew until they reached the age of 40, and females grew until they were 25. The frozen calf “Dima” was 90 cm (35 in) tall when it died at the age of 6–12 months. At this age, the second set of molars would be in the process of erupting, and the first set would be worn out at 18 months of age. The third set of molars lasted for ten years, and this process was repeated until the final, sixth set emerged when the animal was 30 years old. A woolly mammoth could probably reach the age of 60, like modern elephants of the same size. By then the last set of molars would be worn out, the animal would be unable to chew and feed, and it would die of starvation.”
“The habitat of the woolly mammoth is known as “mammoth steppe” or “tundra steppe”. This environment stretched across northern Asia, many parts of Europe, and the northern part of North America during the last ice age. It was similar to the grassy steppes of modern Russia, but the flora was more diverse, abundant, and grew faster. Grasses, sedges, shrubs, and herbaceous plants were present, and scattered trees were mainly found in southern regions. This habitat was not dominated by ice and snow, as is popularly believed, since these regions are thought to have been high-pressure areas at the time. The habitat of the woolly mammoth also supported other grazing herbivores such as the woolly rhinoceros, wild horses and bison. [...] A 2008 study estimated that changes in climate shrank suitable mammoth habitat from 7,700,000 km2 (3,000,000 sq mi) 42,000 years ago to 800,000 km2 (310,000 sq mi) 6,000 years ago. Woolly mammoths survived an even greater loss of habitat at the end of the Saale glaciation 125,000 years ago, and it is likely that humans hunted the remaining populations to extinction at the end of the last glacial period. [...] Several woolly mammoth specimens show evidence of being butchered by humans, which is indicated by breaks, cut-marks, and associated stone tools. It is not known how much prehistoric humans relied on woolly mammoth meat, since there were many other large herbivores available. Many mammoth carcasses may have been scavenged by humans rather than hunted. Some cave paintings show woolly mammoths in structures interpreted as pitfall traps. Few specimens show direct, unambiguous evidence of having been hunted by humans.”
“While frozen woolly mammoth carcasses had been excavated by Europeans as early as 1728, the first fully documented specimen was discovered near the delta of the Lena River in 1799 by Ossip Schumachov, a Siberian hunter. Schumachov let it thaw until he could retrieve the tusks for sale to the ivory trade. [Aargh!] [...] The 1901 excavation of the “Berezovka mammoth” is the best documented of the early finds. It was discovered by the Berezovka River, and the Russian authorities financed its excavation. Its head was exposed, and the flesh had been scavenged. The animal still had grass between its teeth and on the tongue, showing that it had died suddenly. [...] By 1929, the remains of 34 mammoths with frozen soft tissues (skin, flesh, or organs) had been documented. Only four of them were relatively complete. Since then, about that many more have been found.”
ii. Daniel Lambert.
“Daniel Lambert (13 March 1770 – 21 June 1809) was a gaol keeper and animal breeder from Leicester, England, famous for his unusually large size. After serving four years as an apprentice at an engraving and die casting works in Birmingham, he returned to Leicester around 1788 and succeeded his father as keeper of Leicester’s gaol. [...] At the time of Lambert’s return to Leicester, his weight began to increase steadily, even though he was athletically active and, by his own account, abstained from drinking alcohol and did not eat unusual amounts of food. In 1805, Lambert’s gaol closed. By this time, he weighed 50 stone (700 lb; 318 kg), and had become the heaviest authenticated person up to that point in recorded history. Unemployable and sensitive about his bulk, Lambert became a recluse.
In 1806, poverty forced Lambert to put himself on exhibition to raise money. In April 1806, he took up residence in London, charging spectators to enter his apartments to meet him. Visitors were impressed by his intelligence and personality, and visiting him became highly fashionable. After some months on public display, Lambert grew tired of exhibiting himself, and in September 1806, he returned, wealthy, to Leicester, where he bred sporting dogs and regularly attended sporting events. Between 1806 and 1809, he made a further series of short fundraising tours.
In June 1809, he died suddenly in Stamford. At the time of his death, he weighed 52 stone 11 lb (739 lb; 335 kg), and his coffin required 112 square feet (10.4 m2) of wood. Despite the coffin being built with wheels to allow easy transport, and a sloping approach being dug to the grave, it took 20 men almost half an hour to drag his casket into the trench, in a newly opened burial ground to the rear of St Martin’s Church.”
“Sensitive about his weight, Daniel Lambert refused to allow himself to be weighed, but sometime around 1805, some friends persuaded him to come with them to a cock fight in Loughborough. Once he had squeezed his way into their carriage, the rest of the party drove the carriage onto a large scale and jumped out. After deducting the weight of the (previously weighed) empty carriage, they calculated that Lambert’s weight was now 50 stone (700 lb; 318 kg), and that he had thus overtaken Edward Bright, the 616-pound (279 kg) “Fat Man of Maldon”, as the heaviest authenticated person in recorded history.
Despite his shyness, Lambert badly needed to earn money, and saw no alternative to putting himself on display, and charging his spectators. On 4 April 1806, he boarded a specially built carriage and travelled from Leicester to his new home at 53 Piccadilly, then near the western edge of London. For five hours each day, he welcomed visitors into his home, charging each a shilling (about £3.5 as of 2014). [...] Lambert shared his interests and knowledge of sports, dogs and animal husbandry with London’s middle and upper classes, and it soon became highly fashionable to visit him, or become his friend. Many called repeatedly; one banker made 20 visits, paying the admission fee on each occasion. [...] His business venture was immediately successful, drawing around 400 paying visitors per day. [...] People would travel long distances to see him (on one occasion, a party of 14 travelled to London from Guernsey), and many would spend hours speaking with him on animal breeding.”
“After some months in London, Lambert was visited by Józef Boruwłaski, a 3-foot 3-inch (99 cm) dwarf then in his seventies. Born in 1739 to a poor family in rural Pokuttya, Boruwłaski was generally considered to be the last of Europe’s court dwarfs. He was introduced to the Empress Maria Theresa in 1754, and after a short time residing with deposed Polish king Stanisław Leszczyński, he exhibited himself around Europe, thus becoming a wealthy man. At age 60, he retired to Durham, where he became such a popular figure that the City of Durham paid him to live there and he became one of its most prominent citizens [...] The meeting of Lambert and Boruwłaski, the largest and smallest men in the country, was the subject of enormous public interest”
“There was no autopsy, and the cause of Lambert’s death is unknown. While many sources say that he died of a fatty degeneration of the heart or of stress on his heart caused by his bulk, his behaviour in the period leading to his death does not match that of someone suffering from cardiac insufficiency; witnesses agree that on the morning of his death he appeared well, before he became short of breath and collapsed. Bondeson (2006) speculates that the most consistent explanation of his death, given his symptoms and medical history, is that he had a sudden pulmonary embolism.”
“The exposed geology of the Capitol Reef area presents a record of mostly Mesozoic-aged sedimentation in an area of North America in and around Capitol Reef National Park, on the Colorado Plateau in southeastern Utah.
Nearly 10,000 feet (3,000 m) of sedimentary strata are found in the Capitol Reef area, representing nearly 200 million years of geologic history of the south-central part of the U.S. state of Utah. These rocks range in age from Permian (as old as 270 million years old) to Cretaceous (as young as 80 million years old.) Rock layers in the area reveal ancient climates as varied as rivers and swamps (Chinle Formation), Sahara-like deserts (Navajo Sandstone), and shallow ocean (Mancos Shale).
The area’s first known sediments were laid down as a shallow sea invaded the land in the Permian. At first sandstone was deposited but limestone followed as the sea deepened. After the sea retreated in the Triassic, streams deposited silt before the area was uplifted and underwent erosion. Conglomerate followed by logs, sand, mud and wind-transported volcanic ash were later added. Mid to Late Triassic time saw increasing aridity, during which vast amounts of sandstone were laid down along with some deposits from slow-moving streams. As another sea started to return it periodically flooded the area and left evaporite deposits. Barrier islands, sand bars and later, tidal flats, contributed sand for sandstone, followed by cobbles for conglomerate and mud for shale. The sea retreated, leaving streams, lakes and swampy plains to become the resting place for sediments. Another sea, the Western Interior Seaway, returned in the Cretaceous and left more sandstone and shale only to disappear in the early Cenozoic.”
“The Laramide orogeny compacted the region from about 70 million to 50 million years ago and in the process created the Rocky Mountains. Many monoclines (a type of gentle upward fold in rock strata) were also formed by the deep compressive forces of the Laramide. One of those monoclines, called the Waterpocket Fold, is the major geographic feature of the park. The 100 mile (160 km) long fold has a north-south alignment with a steeply east-dipping side. The rock layers on the west side of the Waterpocket Fold have been lifted more than 7,000 feet (2,100 m) higher than the layers on the east. Thus older rocks are exposed on the western part of the fold and younger rocks on the eastern part. This particular fold may have been created due to movement along a fault in the Precambrian basement rocks hidden well below any exposed formations. Small earthquakes centered below the fold in 1979 may be from such a fault. [...] Ten to fifteen million years ago the entire region was uplifted several thousand feet (well over a kilometer) by the creation of the Colorado Plateaus. This time the uplift was more even, leaving the overall orientation of the formations mostly intact. Most of the erosion that carved today’s landscape occurred after the uplift of the Colorado Plateau with much of the major canyon cutting probably occurring between 1 and 6 million years ago.”
“Apollonius of Perga (ca. 262 BC – ca. 190 BC) posed and solved this famous problem in his work Ἐπαφαί (Epaphaí, “Tangencies”); this work has been lost, but a 4th-century report of his results by Pappus of Alexandria has survived. Three given circles generically have eight different circles that are tangent to them [...] and each solution circle encloses or excludes the three given circles in a different way [...] The general statement of Apollonius’ problem is to construct one or more circles that are tangent to three given objects in a plane, where an object may be a line, a point or a circle of any size. These objects may be arranged in any way and may cross one another; however, they are usually taken to be distinct, meaning that they do not coincide. Solutions to Apollonius’ problem are sometimes called Apollonius circles, although the term is also used for other types of circles associated with Apollonius. [...] A rich repertoire of geometrical and algebraic methods have been developed to solve Apollonius’ problem, which has been called “the most famous of all” geometry problems.”
v. Globular cluster.
“A globular cluster is a spherical collection of stars that orbits a galactic core as a satellite. Globular clusters are very tightly bound by gravity, which gives them their spherical shapes and relatively high stellar densities toward their centers. The name of this category of star cluster is derived from the Latin globulus—a small sphere. A globular cluster is sometimes known more simply as a globular.
Globular clusters, which are found in the halo of a galaxy, contain considerably more stars and are much older than the less dense galactic, or open clusters, which are found in the disk. Globular clusters are fairly common; there are about 150 to 158 currently known globular clusters in the Milky Way, with perhaps 10 to 20 more still undiscovered. Large galaxies can have more: Andromeda, for instance, may have as many as 500. [...]
Every galaxy of sufficient mass in the Local Group has an associated group of globular clusters, and almost every large galaxy surveyed has been found to possess a system of globular clusters. The Sagittarius Dwarf galaxy and the disputed Canis Major Dwarf galaxy appear to be in the process of donating their associated globular clusters (such as Palomar 12) to the Milky Way. This demonstrates how many of this galaxy’s globular clusters might have been acquired in the past.
Although it appears that globular clusters contain some of the first stars to be produced in the galaxy, their origins and their role in galactic evolution are still unclear.”
Before I started reading the book I was considering whether it’d be worth it, as a book like this might have little to offer to someone with my background – I’ve had a few stats courses at this point, and the specific topic of medical statistics is not completely unknown to me either; for example I read an epidemiology textbook just last year, and Hill and Glied and Smith covered related topics as well. It wasn’t that I thought there isn’t a lot of medical statistics I don’t already know – there is – it was more of a concern that this specific (type of) book might not be the book to read if I wanted to learn a lot of new stuff in this area.
Disregarding the specific medical context of the book, I already knew a lot about many of the topics covered. To take an example, Bartholomew’s book devoted a lot of pages to the question of how to handle missing data in a sample, a question this book devotes five sentences to. A lot of details are missing here, and the coverage is not very deep. As I hint at in the goodreads review, I think the approach applied in the book is to some extent simply mistaken; I don’t think this (many chapters on different topics, each chapter 2-3 pages long) is a good way to write a statistics textbook. The many short chapters on a wide variety of topics give you the impression that the authors have tried to maximize the number of people who might get something out of this book, which may have ended up meaning that few people will actually get much out of it. On the plus side there are illustrated examples of many of the statistical methods used in the book, and you also get (some of) the relevant formulas for calculating e.g. specific statistics – but you get little understanding of why a method works, when it doesn’t, and what happens when it doesn’t. I already mentioned Bartholomew’s book – one could likewise mention many other entire textbooks devoted to topics which this book covers in a two- or three-page chapter – examples include publications such as this, this and this.
Given the way the book starts out (Which types of data exist? How do you calculate an average, and what is a standard deviation?), I think the people most likely to be reading a book like this are people who have a very limited knowledge of statistics and data analysis – and when people like that read stats books, you need to be very careful with your wording and assumptions. Maybe I’m just a grumpy old man, but I’m not sure the authors are careful enough. A couple of examples:
“Statistical modelling includes the use of simple and multiple linear regression, polynomial regression, logistic regression and methods that deal with survival data. All these methods rely on generating the mathematical model that describes the relationship between two or more variables. In general, any model can be expressed in the form:
g(Y) = a + b1x1 + b2x2 + … + bkxk
where Y is the fitted value of the dependent variable, g(.) is some optional transformation of it (for example, the logit transformation), x1, . . . , xk are the predictor or explanatory variables”
(In case you were wondering, it took me 20 minutes to find out how to lower those 1’s and 2’s because it’s not a standard wordpress function and you need to really want to find out how to do this in order to do it. The k’s still look like crap, but I’m not going to spend more time trying to figure out how to make this look neat. I of course could not copy the book formula into the post, or I would have done that. As I’ve pointed out many times, it’s a nightmare to cover mathematical topics on a blog like this. Yeah, I know Terry Tao also blogs on wordpress, but presumably he writes his posts in a different program – I’m very much against the idea of doing this, even if I am sometimes – in situations like these – seriously reconsidering whether I should do that.)
Let’s look closer at this part again: “In general, any model can be expressed…”
This choice of words and the specific example are the sort of thing I have in mind. If you don’t know a lot about data analysis and you read a statement like this literally – which is the sort of thing I for one am wont to do – you’ll conclude that there’s no such thing as a model which is non-linear in its parameters. But there are a lot of models like that. Imprecise language like this can be incredibly frustrating, because it will lead either to confusion later on or, if people never read another book on any of these topics, to severe overconfidence and mistaken beliefs due to hidden assumptions.
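To make the point concrete, here’s a quick sketch (my own example, not from the book) of a model that is non-linear in its parameters and so falls outside the quoted ‘general’ form – no transformation g(.) of y makes y = a + b·exp(c·x) linear in a, b and c simultaneously, so it has to be fitted by iterative non-linear least squares rather than the closed-form solution of linear regression:

```python
import numpy as np
from scipy.optimize import curve_fit

# A model that is non-linear in its parameters: y = a + b*exp(c*x).
# No transformation g(.) of y turns this into the linear form quoted above.
def model(x, a, b, c):
    return a + b * np.exp(c * x)

rng = np.random.default_rng(0)
x = np.linspace(0, 5, 50)
y = model(x, 1.0, 2.0, -0.7) + rng.normal(0, 0.05, size=x.size)

# Non-linear least squares: iteratively minimises the sum of squared residuals.
params, _ = curve_fit(model, x, y, p0=(0.0, 1.0, -1.0))
print(params)  # estimates should land near (1.0, 2.0, -0.7)
```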
Here’s another example from chapter 28, on ‘Performing a linear regression analysis’:
“Checking the assumptions
For each observed value of x, the residual is the observed y minus the corresponding fitted Y. Each residual may be either positive or negative. We can use the residuals to check the following assumptions underlying linear regression.
1 There is a linear relationship between x and y: Either plot y against x (the data should approximate a straight line), or plot the residuals against x (we should observe a random scatter of points rather than any systematic pattern).
2 The observations are independent: the observations are independent if there is no more than one pair of observations on each individual.”
This is not good. Arguably the independence assumption is, in some contexts, best thought of as untestable in practice; but whether or not it ‘really’ is, there are a lot of ways in which the assumption may be violated, and having no more than one observation per individual is not a sufficient requirement for establishing independence. Assuming otherwise is potentially really problematic.
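To illustrate why the one-observation-per-individual criterion isn’t enough, here’s a small simulation (my own, entirely hypothetical setup): every observation comes from a different individual, yet the observations are not independent, because individuals cluster within clinics which each shift outcomes up or down – and the standard error computed under the independence assumption badly understates the real sampling variability:

```python
import numpy as np

# Hypothetical setup: one observation per individual, but individuals are
# grouped in clinics, and a shared clinic effect induces correlation.
rng = np.random.default_rng(1)
n_clinics, per_clinic, sims = 10, 10, 2000
means, naive_ses = [], []
for _ in range(sims):
    clinic_effect = rng.normal(0, 1, n_clinics)          # shared within a clinic
    y = (np.repeat(clinic_effect, per_clinic)
         + rng.normal(0, 1, n_clinics * per_clinic))     # individual-level noise
    means.append(y.mean())
    naive_ses.append(y.std(ddof=1) / np.sqrt(y.size))    # assumes independence

# Empirical SD of the sample mean vs. the average "independence" SE:
print(np.std(means), np.mean(naive_ses))  # the first is much larger
```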
Here’s another example:
“Some words of comfort
Do not worry if you find the theory underlying probability distributions complex. Our experience demonstrates that you want to know only when and how to use these distributions. We have therefore outlined the essentials, and omitted the equations that define the probability distributions. You will find that you only need to be familiar with the basic ideas, the terminology and, perhaps (although infrequently in this computer age), know how to refer to the tables.”
I found this part problematic. If you want to do hypothesis testing using things like the Chi-squared distribution or the F-test (both ‘covered’, sort of, in the book), you need to be really careful about details like the relevant degrees of freedom and how these may depend on what you’re doing with the data, and stuff like this is sometimes not obvious – not even to people who’ve worked with the equations (well, sometimes it is obvious, but it’s easy to forget to correct for estimated parameters and you can’t always expect the program to do this for you, especially not in more complex model frameworks). My position is that if you’ve never even seen the relevant equations, you have no business conducting anything but the most basic of analyses involving these distributions. Of course a person who’s only read this book would not be able to do more than that, but even so instead of ‘some words of comfort’ I’d much rather have seen ‘some words of caution’.
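As a concrete illustration of the degrees-of-freedom point (my own example, using scipy – not something from the book): in a goodness-of-fit test, each parameter estimated from the data costs a degree of freedom, and you can’t always expect the software to account for this for you – here you have to pass it explicitly:

```python
import numpy as np
from scipy.stats import chisquare, poisson

# Hypothetical goodness-of-fit example: do these counts follow a Poisson law?
rng = np.random.default_rng(2)
data = rng.poisson(2.0, size=200)

lam = data.mean()   # the Poisson mean is estimated FROM the data
k = 6               # categories 0, 1, 2, 3, 4 and "5 or more"
obs = np.array([np.sum(data == i) for i in range(k - 1)] + [np.sum(data >= k - 1)])
exp = 200 * np.append(poisson.pmf(np.arange(k - 1), lam), poisson.sf(k - 2, lam))

# Each estimated parameter costs one degree of freedom: df = k - 2, not k - 1.
stat, p_wrong = chisquare(obs, exp)            # df = k - 1: ignores the estimation
stat, p_right = chisquare(obs, exp, ddof=1)    # df = k - 2: correct
print(p_wrong, p_right)  # the naive p-value is too large (test too conservative)
```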
One last one:
“* Categorical data – It is relatively easy to check categorical data, as the responses for each variable can only take one of a number of limited values. Therefore, values that are not allowable must be errors.”
Nothing else is said about error checking of categorical data in this specific context, so it would be natural to conclude from this that simply checking whether values are ‘allowable’ or not is sufficient to catch all the errors. But the statement is close to uninformative, because a key term remains undefined: the question of how to define (observation-specific) ‘allowability’ in the first place is neglected, and that is the real issue. A proper error-finding algorithm has to apply a precise and unambiguous definition of that term, and constructing and applying such an algorithm may sometimes be quite hard, especially when multiple categories are used and allowed and the category dimension in question is hard to cross-check against other variables. Reading the above sequence, it’d be easy for the reader to assume that all of this is very simple and easy.
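A small sketch of what I mean (all variable names and rules here are hypothetical, mine rather than the book’s): a bare check against a fixed list of codes catches impossible values, but ‘allowability’ often depends on the rest of the record, and that part has to be defined explicitly:

```python
# Hypothetical codebook: the set of allowable codes per variable.
ALLOWED = {"sex": {"M", "F"},
           "smoker": {"yes", "no", "ex"},
           "pregnant": {"yes", "no", "n/a"}}

def errors_in(record):
    # Step 1: the naive check the quoted passage describes.
    errs = [f"{k}={v!r} not an allowed code"
            for k, v in record.items() if v not in ALLOWED[k]]
    # Step 2: observation-specific allowability needs cross-field rules;
    # a value can be a valid code yet still be an error for THIS record.
    if record.get("sex") == "M" and record.get("pregnant") == "yes":
        errs.append("pregnant='yes' not allowable given sex='M'")
    return errs

print(errors_in({"sex": "M", "smoker": "sometimes", "pregnant": "yes"}))
```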
Oh well, all this said, the book did have some good stuff as well. I’ve added some further comments and observations from the book below, with which I did not ‘disagree’ (to the extent that this is even possible). It should be noted that the book has a lot of focus on hypothesis testing and (how to conduct) different statistical tests, and very little on statistical modelling. Many different tests are mentioned and/or explicitly covered in the book; aside from e.g. standard z-, t- and F-tests, these include things like McNemar’s test, Bartlett’s test, the sign test, and the Wilcoxon rank-sum test. Most of these were covered – I realized after having read the book – in the last part of the first statistics text I read, a part I was not required to study and so technically hadn’t read. So I did come across some new stuff while reading the book. Those specific parts were actually some of the parts of the book I liked best, because they contained stuff I didn’t already know, and not just stuff which I used to know but had forgotten about. The few additional quotes added below do to some extent illustrate what the book is like, but it should also be kept in mind that they’re perhaps not completely ‘fair’, in the sense of providing a balanced and representative sample of the kind of stuff included in the publication; there are many (but perhaps not enough…) equations along the way (which I’m not going to blog, for reasons already mentioned), and the book includes detailed explanations and illustrations of how to conduct specific tests – it’s quite ‘hands-on’ in some respects, and a lot of tools will be added to the toolbox of someone who’s not read a similar publication before.
“Generally, we make comparisons between individuals in different groups. For example, most clinical trials (Topic 14) are parallel trials, in which each patient receives one of the two (or occasionally more) treatments that are being compared, i.e. they result in between-individual comparisons.
Because there is usually less variation in a measurement within an individual than between different individuals (Topic 6), in some situations it may be preferable to consider using each individual as his/her own control. These within-individual comparisons provide more precise comparisons than those from between-individual designs, and fewer individuals are required for the study to achieve the same level of precision. In a clinical trial setting, the crossover design is an example of a within-individual comparison; if there are two treatments, every individual gets each treatment, one after the other in a random order to eliminate any effect of calendar time. The treatment periods are separated by a washout period, which allows any residual effects (carry-over) of the previous treatment to dissipate. We analyse the difference in the responses on the two treatments for each individual. This design can only be used when the treatment temporarily alleviates symptoms rather than provides a cure, and the response time is not prolonged.”
“A cohort study takes a group of individuals and usually follows them forward in time, the aim being to study whether exposure to a particular aetiological factor will affect the incidence of a disease outcome in the future [...]
Advantages of cohort studies
*The time sequence of events can be assessed.
*They can provide information on a wide range of outcomes.
*It is possible to measure the incidence/risk of disease directly.
*It is possible to collect very detailed information on exposure to a wide range of factors.
*It is possible to study exposure to factors that are rare.
*Exposure can be measured at a number of time points, so that changes in exposure over time can be studied.
*There is reduced recall and selection bias compared with case-control studies (Topic 16).
Disadvantages of cohort studies
*In general, cohort studies follow individuals for long periods of time, and are therefore costly to perform.
*Where the outcome of interest is rare, a very large sample size is needed.
*As follow-up increases, there is often increased loss of patients as they migrate or leave the study, leading to biased results.
*As a consequence of the long time-scale, it is often difficult to maintain consistency of measurements and outcomes over time. [...]
*It is possible that disease outcomes and their probabilities, or the aetiology of disease itself, may change over time.”
“A case-control study compares the characteristics of a group of patients with a particular disease outcome (the cases) to a group of individuals without a disease outcome (the controls), to see whether any factors occurred more or less frequently in the cases than the controls [...] Many case-control studies are matched in order to select cases and controls who are as similar as possible. In general, it is useful to sex-match individuals (i.e. if the case is male, the control should also be male), and, sometimes, patients will be age-matched. However, it is important not to match on the basis of the risk factor of interest, or on any factor that falls within the causal pathway of the disease, as this will remove the ability of the study to assess any relationship between the risk factor and the disease. Unfortunately, matching [means] that the effect on disease of the variables that have been used for matching cannot be studied.”
“Advantages of case-control studies
quick, cheap and easy [...] particularly suitable for rare diseases. [...] A wide range of risk factors can be investigated. [...] no loss to follow-up.
Disadvantages of case-control studies
Recall bias, when cases have a differential ability to remember certain details about their histories, is a potential problem. For example, a lung cancer patient may well remember the occasional period when he/she smoked, whereas a control may not remember a similar period. [...] If the onset of disease preceded exposure to the risk factor, causation cannot be inferred. [...] Case-control studies are not suitable when exposures to the risk factor are rare.”
“The P-value is the probability of obtaining our results, or something more extreme, if the null hypothesis is true. The null hypothesis relates to the population of interest, rather than the sample. Therefore, the null hypothesis is either true or false and we cannot interpret the P-value as the probability that the null hypothesis is true.”
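A quick simulation of what this definition implies (my own illustration, not from the book): when the null hypothesis is true, P-values are approximately uniformly distributed, so about 5% of tests come out ‘significant’ at the 0.05 level purely by chance:

```python
import numpy as np
from scipy.stats import ttest_1samp

# Simulate many one-sample t-tests where the null hypothesis (mean = 0)
# really is true; count how often p < 0.05 anyway.
rng = np.random.default_rng(3)
pvals = [ttest_1samp(rng.normal(0, 1, 30), popmean=0).pvalue
         for _ in range(4000)]
print(np.mean(np.array(pvals) < 0.05))  # close to 0.05
```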
“Hypothesis tests which are based on knowledge of the probability distributions that the data follow are known as parametric tests. Often data do not conform to the assumptions that underlie these methods (Topic 32). In these instances we can use non-parametric tests (sometimes referred to as distribution-free tests, or rank methods). [...] Non-parametric tests are particularly useful when the sample size is small [...], and when the data are measured on a categorical scale. However, non-parametric tests are generally wasteful of information; consequently they have less power [...] A number of factors have a direct bearing on power for a given test.
*The sample size: power increases with increasing sample size. [...]
*The variability of the observations: power increases as the variability of the observations decreases [...]
*The effect of interest: the power of the test is greater for larger effects. A hypothesis test thus has a greater chance of detecting a large real effect than a small one.
*The significance level: the power is greater if the significance level is larger”
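The sample-size point is easy to demonstrate with a small simulation (my own, with made-up numbers): the same true difference of half a standard deviation is detected far more often with 80 subjects per group than with 20 per group:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(4)

def power(n, diff=0.5, sims=2000, alpha=0.05):
    # Estimate power by simulation: fraction of trials where a two-sample
    # t-test detects a true mean difference of `diff` (in SD units).
    hits = 0
    for _ in range(sims):
        a = rng.normal(0, 1, n)
        b = rng.normal(diff, 1, n)
        hits += ttest_ind(a, b).pvalue < alpha
    return hits / sims

print(power(20), power(80))  # roughly 0.3 vs 0.9
```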
“The statistical use of the word ‘regression’ derives from a phenomenon known as regression to the mean, attributed to Sir Francis Galton in 1889. He demonstrated that although tall fathers tend to have tall sons, the average height of the sons is less than that of their tall fathers. The average height of the sons has ‘regressed’ or ‘gone back’ towards the mean height of all the fathers in the population. So, on average, tall fathers have shorter (but still tall) sons and short fathers have taller (but still short) sons.
We observe regression to the mean in screening and in clinical trials, when a subgroup of patients may be selected for treatment because their levels of a certain variable, say cholesterol, are extremely high (or low). If the measurement is repeated some time later, the average value for the second reading for the subgroup is usually less than that of the first reading, tending towards (i.e. regressing to) the average of the age- and sex-matched population, irrespective of any treatment they may have received. Patients recruited into a clinical trial on the basis of a high cholesterol level on their first examination are thus likely to show a drop in cholesterol levels on average at their second examination, even if they remain untreated during this period.”
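This phenomenon is easy to reproduce in a simulation (my own sketch, with made-up cholesterol-like numbers): each subject has a stable true level plus independent measurement-to-measurement noise, and we ‘recruit’ the top 10% at the first screening:

```python
import numpy as np

rng = np.random.default_rng(5)
true = rng.normal(5.5, 0.8, 100_000)          # between-subject variation
first = true + rng.normal(0, 0.5, true.size)  # first measurement
second = true + rng.normal(0, 0.5, true.size) # second measurement, untreated

# Select the subgroup with extreme first readings (top 10%).
selected = first > np.quantile(first, 0.9)
print(first[selected].mean(), second[selected].mean())
# The second reading is lower on average, with no treatment at all:
# the selected group's first readings were partly inflated by noise.
```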
“A systematic review is a formalized and stringent process of combining the information from all relevant studies (both published and unpublished) of the same health condition; these studies are usually clinical trials [...] of the same or similar treatments but may be observational studies [...] a meta-analysis, because of its inflated sample size, is able to detect treatment effects with greater power and estimate these effects with greater precision than any single study. Its advantages, together with the introduction of meta-analysis software, have led meta-analyses to proliferate. However, improper use can lead to erroneous conclusions regarding treatment efficacy. The following principal problems should be thoroughly investigated and resolved before a meta-analysis is performed.
*Publication bias - the tendency to include in the analysis only the results from published papers; these favour statistically significant findings.
*Clinical heterogeneity - in which differences in the patient population, outcome measures, definition of variables, and/or duration of follow-up of the studies included in the analysis create problems of non-compatibility.
*Quality differences - the design and conduct of the studies may vary in their quality. Although giving more weight to the better studies is one solution to this dilemma, any weighting system can be criticized on the grounds that it is arbitrary.
*Dependence - the results from studies included in the analysis may not be independent, e.g. when results from a study are published on more than one occasion.”
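For what it’s worth, the basic fixed-effect calculation behind the ‘greater precision’ claim is quite simple; here’s a minimal inverse-variance-weighting sketch with made-up study results (not numbers from the book):

```python
import numpy as np

# Hypothetical study-level results, e.g. log odds ratios with standard errors.
effects = np.array([0.42, 0.31, 0.55, 0.38])
ses = np.array([0.20, 0.15, 0.25, 0.18])

# Fixed-effect pooling: weight each study by the inverse of its variance.
w = 1 / ses**2
pooled = np.sum(w * effects) / np.sum(w)
pooled_se = np.sqrt(1 / np.sum(w))
print(pooled, pooled_se)  # the pooled SE is smaller than every individual SE
```

This is why a meta-analysis estimates the effect more precisely than any single study; the problems listed above concern whether those inputs can be trusted in the first place.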
“Mathematical models underpin much ecological theory, [...] [y]et most students of ecology and environmental science receive much less formal training in mathematics than their counterparts in other scientific disciplines. Motivating both graduate and undergraduate students to study ecological dynamics thus requires an introduction which is initially accessible with limited mathematical and computational skill, and yet offers glimpses of the state of the art in at least some areas. This volume represents our attempt to reconcile these conflicting demands [...] Ecology is the branch of biology that deals with the interaction of living organisms with their environment. [...] The primary aim of this book is to develop general theory for describing ecological dynamics. Given this aspiration, it is useful to identify questions that will be relevant to a wide range of organisms and/or habitats. We shall distinguish questions relating to individuals, populations, communities, and ecosystems. A population is all the organisms of a particular species in a given region. A community is all the populations in a given region. An ecosystem is a community related to its physical and chemical environment. [...] Just as the physical and chemical properties of materials are the result of interactions involving individual atoms and molecules, so the dynamics of populations and communities can be interpreted as the combined effects of properties of many individuals [...] All models are (at best) approximations to the truth so, given data of sufficient quality and diversity, all models will turn out to be false. The key to understanding the role of models in most ecological applications is to recognise that models exist to answer questions. A model may provide a good description of nature in one context but be woefully inadequate in another. [...] Ecology is no different from other disciplines in its reliance on simple models to underpin understanding of complex phenomena. [...] 
the real world, with all its complexity, is initially interpreted through comparison with the simplistic situations described by the models. The inevitable deviations from the model predictions [then] become the starting point for the development of more specific theory.”
I haven’t blogged this book yet even though it’s been a while since I finished it, so I figured I ought to talk a little bit about it now. As pointed out on goodreads, I really liked the book. It’s basically a math textbook for biologists, dealing with how to set up models in a specific context – that of questions pertaining to ecological dynamics; having read the above quote you should at this point at least have some idea of which kind of stuff this field deals with. Here are a few links to examples of applications mentioned/covered in the book, which may give you a better idea of the kinds of things covered.
There are 9 chapters in the book, and only the introductory chapter has fewer than 50 ‘named’ equations – most have around 70-80 equations, and 3 of them have more than 100. I have tried to avoid equations in this post, in part because it’s hell to deal with them in wordpress, so I’ll be leaving out a lot of stuff in my coverage. Large chunks of the coverage were to some extent review, but there was also some new stuff in there. The book covers material intended both for undergraduates and graduates, and even though it is presumably aimed at biology majors, many of the ideas can also be ‘transferred’ to other contexts where the same types of modelling frameworks might be applied; for example there are some differences between discrete-time models and continuous-time models, and those differences apply regardless of whether you’re modelling animal behaviour or, say, human behaviour. A local stability analysis looks quite similar in the context of an economic model and in that of an ecological model. Etc. I’ve tried to mostly talk about rather ‘general’ stuff in this coverage, i.e. model concepts and key ideas covered in the book which might be applicable in other fields of research as well, and I’ve tried to keep things reasonably simple in this post; I’ve only talked about stuff from the first three chapters.
“The simplest ecological models, called deterministic models, make the assumption that if we know the present condition of a system, we can predict its future. Before we can begin to formulate such a model, we must decide what quantities, known as state variables, we shall use to describe the current condition of the system. This choice always involves a subtle balance of biological realism (or at least plausibility) against mathematical complexity. [...] The first requirement in formulating a usable model is [...] to decide which characteristics are dynamically important in the context of the questions the model seeks to answer. [...] The diversity of individual characteristics and behaviours implies that without considerable effort at simplification, a change of focus towards communities will be accompanied by an explosive increase in model complexity. [...] A dynamical model is a mathematical statement of the rules governing change. The majority of models express these rules either as an update rule, specifying the relationship between the current and future state of the system, or as a differential equation, specifying the rate of change of the state variables. [...] A system with [the] property [that the update rule does not depend on time] is said to be autonomous. [...] [If the update rule depends on time, the models are called non-autonomous].”
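An update rule of this kind is easy to write down in code (a standard textbook example, the discrete-time logistic model – my illustration, not taken from the book):

```python
# Update rule: N(t+1) = N(t) + r*N(t)*(1 - N(t)/K).
# The rule does not depend on t, so the system is autonomous; making r a
# function of t (say, seasonal variation) would make it non-autonomous.
def update(N, r=0.5, K=100.0):
    return N + r * N * (1 - N / K)

N = 5.0
for t in range(60):
    N = update(N)
print(N)  # settles near the carrying capacity K = 100
```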
“Formulation of a dynamic model always starts by identifying the fundamental processes in the system under investigation and then setting out, in mathematical language, the statement that changes in system state can only result from the operation of these processes. The “bookkeeping” framework which expresses this insight is often called a conservation equation or a balance equation. [...] Writing down balance equations is just the first step in formulating an ecological model, since only in the most restrictive circumstances do balance equations on their own contain enough information to allow prediction of future values of state variables. In general, [deterministic] model formulation involves three distinct steps: *choose state variables, *derive balance equations, *make model-specific assumptions.
Selection of state variables involves biological or ecological judgment [...] Deriving balance equations involves both ecological choices (what processes to include) and mathematical reasoning. The final step, the selection of assumptions particular to any one model, is left to last in order to facilitate model refinement. For example, if a model makes predictions that are at variance with observation, we may wish to change one of the model assumptions, while still retaining the same state variables and processes in the balance equations. [...] a remarkably good approximation to [...] stochastic dynamics is often obtained by regarding the dynamics as ‘perturbations’ of a non-autonomous, deterministic system. [...] although randomness is ubiquitous, deterministic models are an appropriate starting point for much ecological modelling. [...] even where deterministic models are inadequate, an essential prerequisite to the formulation and analysis of many complex, stochastic models is a good understanding of a deterministic representation of the system under investigation.”
“Faced with an update rule or a balance equation describing an ecological system, what do we do? The most obvious line of attack is to attempt to find an analytical solution [...] However, except for the simplest models, analytical solutions tend to be impossible to derive or to involve formulae so complex as to be completely unhelpful. In other situations, an explicit solution can be calculated numerically. A numerical solution of a difference equation is a table of values of the state variable (or variables) at successive time steps, obtained by repeated application of the update rule [...] Numerical solutions of differential equations are more tricky [but sophisticated methods for finding them do exist] [...] for simple systems it is possible to obtain considerable insight by ‘numerical experiments’ involving solutions for a number of parameter values and/or initial conditions. For more complex models, numerical analysis is typically the only approach available. But the unpleasant reality is that in the vast majority of investigations it proves impossible to obtain complete or near-complete information about a dynamical system, either by deriving analytical solutions or by numerical experimentation. It is therefore reassuring that over the past century or so, mathematicians have developed methods of determining the qualitative properties of the solutions of dynamic equations, and thus answering many questions [...] without explicitly solving the equations concerned.”
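The ‘table of values obtained by repeated application of the update rule’ is easy to make concrete. A minimal sketch in Python, using the discrete logistic map as a stand-in model (the function names and parameter values are my own illustrative choices, not the book’s):

```python
def numerical_solution(update, x0, steps):
    """Tabulate the state variable at successive time steps by
    repeatedly applying the update rule."""
    xs = [x0]
    for _ in range(steps):
        xs.append(update(xs[-1]))
    return xs

# Discrete logistic map x' = r*x*(1 - x) with an arbitrarily chosen r
trajectory = numerical_solution(lambda x: 3.2 * x * (1.0 - x), 0.5, 50)
```

For r = 3.2 the tabulated values settle into a two-point cycle rather than a single equilibrium, which already hints at the kind of qualitative questions the authors raise.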
“[If] the long-term behaviour of the state variable is independent of the initial condition [...] the ‘end state’ [...] is known as an attractor. [...] Equilibrium states need not be attractors; they can be repellers [as well] [...] if a dynamical system has an equilibrium state, any initial condition other than the exact equilibrium value may lead to the state variable converging towards the equilibrium or diverging away from it. We characterize such equilibria as stable and unstable respectively. In some models all initial conditions result in the state variable eventually converging towards a single equilibrium value. We characterize such equilibria as globally stable. An equilibrium that is approached only from a subset of all possible initial conditions (often those close to the equilibrium itself) is said to be locally stable. [...] The combination of non-periodic solutions and sensitive dependence on initial conditions is the signature of the pattern of behaviour known to mathematicians as chaos.”
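Sensitive dependence on initial conditions is simple to demonstrate numerically. A sketch (a standard textbook-style illustration of mine, not an example from the book) using the logistic map in its chaotic regime, r = 4:

```python
def step(x, r=4.0):
    """One iteration of the logistic map in its chaotic regime (r = 4)."""
    return r * x * (1.0 - x)

# Two trajectories whose initial conditions differ by one part in 10^10
x, y = 0.3, 0.3 + 1e-10
gaps = []
for _ in range(60):
    x, y = step(x), step(y)
    gaps.append(abs(x - y))
```

The initial gap of 10^-10 is amplified, on average, by roughly a factor of two per iteration, so within a few dozen steps the two trajectories bear no useful relation to one another; long-term prediction from imperfectly known initial conditions becomes impossible.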
“Most variables and parameters in models have units. [...] However, the behaviour of a natural system cannot be affected by the units in which we choose to measure the quantities we use to describe it. This implies that it should be possible to write down the defining equations of a model in a form independent of the units we use. For any dynamical equation to be valid, the quantities being equated must be measured in the same units. How then do we restate such an equation in a form which is unaffected by our choice of units? The answer lies in identifying a natural scale or base unit for each quantity in the equations and then using the ratio of each variable to its natural scale in our dynamic description. Since such ratios are pure numbers, we say that they are dimensionless. If a dynamic equation couched in terms of dimensionless variables is to be valid, then both sides of any equality must likewise be dimensionless. [...] the process of non-dimensionalisation, which we call dimensional analysis, can [...] yield information on system dynamics. [...] Since there is no unique dimensionless form for any set of dynamical equations, it is tempting to cut short the scaling process by ‘setting some parameter(s) equal to one’. Even experienced modellers make embarrassing blunders doing this, and we strongly recommend a systematic [...] approach [...] The key element in the scaling process is the selection of appropriate base units – the optimal choice being dependent on the questions motivating our study.”
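A standard worked example of the scaling procedure (my example, not the book’s): in the logistic growth equation the population has a natural scale K (the carrying capacity) and time has a natural scale 1/r, and measuring both quantities in these base units eliminates every parameter:

```latex
\frac{dN}{dt} = rN\left(1 - \frac{N}{K}\right),
\qquad n = \frac{N}{K}, \quad \tau = rt
\;\Longrightarrow\;
\frac{dn}{d\tau} = n(1 - n).
```

Both sides of the final equation are dimensionless, as required; a different choice of base units would yield a different, equally valid, dimensionless form, which is exactly why the authors warn against casually ‘setting parameters equal to one’.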
“The starting point for selecting the appropriate formalism [in the context of the time dimension] must [...] be recognition that real ecological processes operate in continuous time. Discrete-time models make some approximation to the outcome of these processes over a finite time interval, and should thus be interpreted with care. This caution is particularly important as difference equations are intuitively appealing and computationally simple. [...] incautious empirical modelling with difference equations can have surprising (adverse) consequences. [...] where the time increment of a discrete-time model is an arbitrary modelling choice, model predictions should be shown to be robust against changes in the value chosen.”
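The warning about the time increment of discrete-time models can be illustrated with a forward-Euler discretisation of logistic growth (an illustration of mine, under the assumption that the continuous-time equation is the ‘true’ model):

```python
def euler_logistic(n0, dt, steps):
    """Forward-Euler approximation of dn/dt = n(1 - n)."""
    traj = [n0]
    for _ in range(steps):
        n = traj[-1]
        traj.append(n + dt * n * (1.0 - n))
    return traj

fine = euler_logistic(0.1, 0.01, 3000)  # t = 0 .. 30, small time step
coarse = euler_logistic(0.1, 3.0, 10)   # same interval, a 300x larger step
```

With the small step the solution rises smoothly towards the carrying capacity, as the differential equation dictates; with the large step the very same model equation overshoots and oscillates persistently, an artefact of the arbitrary time increment rather than of the biology. This is the sense in which predictions should be shown to be robust against the value chosen.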
“Of the almost limitless range of relations between population flux and local density, we shall discuss only two extreme possibilities. Advection occurs when an external physical flow (such as an ocean current) transports all the members of the population past the point, x [in a spatially one-dimensional model], with essentially the same velocity, v. [...] Diffusion occurs when the members of the population move at random. [...] This leads to a net flow rate which is proportional to the spatial gradient of population density, with a constant of proportionality D, which we call the diffusion constant. [...] the net flow [in this case] takes individuals from regions of high density to regions of low density” [...some remarks about reaction-diffusion models, which I'd initially thought I'd cover here but which turned out to be too much work to deal with (the coverage is highly context-dependent)].
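In symbols (using the book’s v and D, with n(x,t) the population density and J(x,t) the net flow rate past the point x), the standard forms of the two extreme cases described above, together with the one-dimensional balance equation they feed into, are:

```latex
J_{\text{advection}} = v\,n(x,t),
\qquad
J_{\text{diffusion}} = -D\,\frac{\partial n}{\partial x},
\qquad
\frac{\partial n}{\partial t} = -\frac{\partial J}{\partial x}.
```

The minus sign in the diffusive flux encodes the statement in the quote: the net flow takes individuals from regions of high density to regions of low density.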
This is a neat little book in the Springer Briefs in Statistics series. The author is David J Bartholomew, a former statistics professor at the LSE. I wrote a brief goodreads review, but I thought that I might as well also add a post about the book here. The book covers topics such as the EM algorithm, Gibbs sampling, the Metropolis–Hastings algorithm and the Rasch model, and it assumes you’re familiar with stuff like how to do ML estimation, among many other things. I had some passing familiarity with many of the topics he talks about in the book, but I’m sure I’d have benefited from knowing more about some of the specific topics covered. Because large parts of the book are basically unreadable for people without a stats background, I wasn’t sure how much of it made sense to cover here, but I decided to talk a bit about a few of the things which I believe don’t require you to know a whole lot about this area.
“Modern statistics is built on the idea of models—probability models in particular. [While I was rereading this part, I was reminded of this quote which I came across while finishing my most recent quotes post: "No scientist is as model minded as is the statistician; in no other branch of science is the word model as often and consciously used as in statistics." Hans Freudenthal.] The standard approach to any new problem is to identify the sources of variation, to describe those sources by probability distributions and then to use the model thus created to estimate, predict or test hypotheses about the undetermined parts of that model. [...] A statistical model involves the identification of those elements of our problem which are subject to uncontrolled variation and a specification of that variation in terms of probability distributions. Therein lies the strength of the statistical approach and the source of many misunderstandings. Paradoxically, misunderstandings arise both from the lack of an adequate model and from over reliance on a model. [...] At one level is the failure to recognise that there are many aspects of a model which cannot be tested empirically. At a higher level is the failure to recognise that any model is, necessarily, an assumption in itself. The model is not the real world itself but a representation of that world as perceived by ourselves. This point is emphasised when, as may easily happen, two or more models make exactly the same predictions about the data. Even worse, two models may make predictions which are so close that no data we are ever likely to have can ever distinguish between them. [...] All model-dependent inference is necessarily conditional on the model. This stricture needs, especially, to be borne in mind when using Bayesian methods. Such methods are totally model-dependent and thus all are vulnerable to this criticism.
The problem can apparently be circumvented, of course, by embedding the model in a larger model in which any uncertainties are, themselves, expressed in probability distributions. However, in doing this we are embarking on a potentially infinite regress which quickly gets lost in a fog of uncertainty.”
“Mixtures of distributions play a fundamental role in the study of unobserved variables [...] The two important questions which arise in the analysis of mixtures concern how to identify whether or not a given distribution could be a mixture and, if so, to estimate the components. [...] Mixtures arise in practice because of failure to recognise that samples are drawn from several populations. If, for example, we measure the heights of men and women without distinction the overall distribution will be a mixture. It is relevant to know this because women tend to be shorter than men. [...] It is often not at all obvious whether a given distribution could be a mixture [...] even a two-component mixture of normals, has 5 unknown parameters. As further components are added the estimation problems become formidable. If there are many components, separation may be difficult or impossible [...] [To add to the problem,] the form of the distribution is unaffected by the mixing [in the case of the mixing of normals]. Thus there is no way that we can recognise that mixing has taken place by inspecting the form of the resulting distribution alone. Any given normal distribution could have arisen naturally or be the result of normal mixing [...] if f(x) is normal, there is no way of knowing whether it is the result of mixing and hence, if it is, what the mixing distribution might be.”
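The height example can be simulated directly. A sketch with hypothetical parameters (the means and standard deviation below are my illustrative choices, not figures from the book); note that a 50/50 two-component normal mixture is described by exactly five parameters: one mixing weight, two means and two standard deviations.

```python
import random

random.seed(42)

# Hypothetical components: women ~ N(165, 7), men ~ N(178, 7), mixed 50/50
women = [random.gauss(165, 7) for _ in range(5000)]
men = [random.gauss(178, 7) for _ in range(5000)]
pooled = women + men  # heights measured "without distinction"

mean = sum(pooled) / len(pooled)
var = sum((x - mean) ** 2 for x in pooled) / len(pooled)
```

The pooled mean sits between the two component means, and the pooled variance is inflated above the within-group variance of 49 (theoretically to about 49 + 6.5^2 ≈ 91, since each component mean sits 6.5 cm from the overall mean). But, as the author stresses, nothing in the shape of the pooled distribution alone forces the mixture interpretation on us.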
“Even if there is close agreement between a model and the data it does not follow that the model provides a true account of how the data arose. It may be that several models explain the data equally well. When this happens there is said to be a lack of identifiability. Failure to take full account of this fact, especially in the social sciences, has led to many over-confident claims about the nature of social reality. Lack of identifiability within a class of models may arise because different values of their parameters provide equally good fits. Or, more seriously, models with quite different characteristics may make identical predictions. [...] If we start with a model we can predict, albeit uncertainly, what data it should generate. But if we are given a set of data we cannot necessarily infer that it was generated by a particular model. In some cases it may, of course, be possible to achieve identifiability by increasing the sample size but there are cases in which, no matter how large the sample size, no separation is possible. [...] Identifiability matters can be considered under three headings. First there is lack of parameter identifiability which is the most common use of the term. This refers to the situation where there is more than one value of a parameter in a given model each of which gives an equally good account of the data. [...] Secondly there is what we shall call lack of model identifiability which occurs when two or more models make exactly the same data predictions. [...] The third type of identifiability is actually the combination of the foregoing types.
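A minimal toy illustration (mine, not the author’s) of parameter non-identifiability: if two parameters enter the model only through their product, no amount of data can separate them, because different parameter values generate identical predictions:

```python
def predict(a, b, xs):
    """Toy model whose predictions depend on a and b only through a*b."""
    return [a * b * x for x in xs]

xs = [0.0, 0.5, 1.0, 2.0, 3.5]
fit1 = predict(2.0, 3.0, xs)  # a*b = 6
fit2 = predict(1.0, 6.0, xs)  # a*b = 6, but different parameter values
```

Since the two parameter settings predict identically for every conceivable input, increasing the sample size cannot help; this is exactly the case where, no matter how large the sample, no separation is possible.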
Mathematical statistics is not well-equipped to cope with situations where models are practically, but not precisely, indistinguishable because it typically deals with things which can only be expressed in unambiguously stated theorems. Of necessity, these make clear-cut distinctions which do not always correspond with practical realities. For example, there are theorems concerning such things as sufficiency and admissibility. According to such theorems, for example, a proposed statistic is either sufficient or not sufficient for some parameter. If it is sufficient it contains all the information, in a precisely defined sense, about that parameter. But in practice we may be much more interested in what we might call ‘near sufficiency’ in some more vaguely defined sense. Because we cannot give a precise mathematical definition to what we mean by this, the practical importance of the notion is easily overlooked. The same kind of fuzziness arises with what are called structural equation models (or structural relations models) which have played a very important role in the social sciences. [...] we shall argue that structural equation models are almost always unidentifiable in the broader sense of which we are speaking here. [...] [our results] constitute a formidable argument against the careless use of structural relations models. [...] In brief, the valid use of a structural equations model requires us to lean very heavily upon assumptions about which we may not be very sure. It is undoubtedly true that if such a model provides a good fit to the data, then it provides a possible account of how the data might have arisen. It says nothing about what other models might provide an equally good, or even better fit. As a tool of inductive inference designed to tell us something about the social world, linear structural relations modelling has very little to offer.”
“It is very common for data to be missing and this introduces a risk of bias if inferences are drawn from incomplete samples. However, we are not usually interested in the missing data themselves but in the population characteristics to whose estimation those values were intended to contribute. [...] A very longstanding way of dealing with missing data is to fill in the gaps by some means or other and then carry out the standard analysis on the completed data set. This procedure is known as imputation. [...] In its simplest form, each missing data point is replaced by a single value. Because there is, inevitably, uncertainty about what the imputed values should be, one can do better by substituting a range of plausible values and comparing the results in each case. This is known as multiple imputation. [...] missing values may occur anywhere and in any number. They may occur haphazardly or in some pattern. In the latter case, the pattern may provide a clue to the mechanism underlying the loss of data and so suggest a method for dealing with it. The conditional distribution which we have supposed might be the basis of imputation depends, of course, on the mechanism behind the loss of data. From a practical point of view the detailed information necessary to determine this may not be readily obtainable or, even, necessary. Nevertheless, it is useful to clarify some of the issues by introducing the idea of a probability mechanism governing the loss of data. This will enable us to classify the problems which would have to be faced in a more comprehensive treatment. The simplest, if least realistic approach, is to assume that the chance of being missing is the same for all elements of the data matrix. In that case, we can, in effect, ignore the missing values [...] Such situations are designated as MCAR which is an acronym for Missing Completely at Random. [...] In the smoking example we have supposed that men are more likely to refuse [to answer] than women. 
If we go further and assume that there are no other biasing factors we are, in effect, assuming that ‘missingness’ is completely at random for men and women, separately. This would be an example of what is known as Missing at Random (MAR) [...] which means that the missing mechanism depends on the observed variables but not on those that are missing. The final category is Missing Not at Random (MNAR) which is a residual category covering all other possibilities. This is difficult to deal with in practice unless one has an unusually complete knowledge of the missing mechanism.
Another term used in the theory of missing data is that of ignorability. The conditional distribution of y given x will, in general, depend on any parameters of the distribution of M [the variable we use to describe the mechanism governing the loss of observations] yet these are unlikely to be of any practical interest. It would be convenient if this distribution could be ignored for the purposes of inference about the parameters of the distribution of x. If this is the case the mechanism of loss is said to be ignorable. In practice it is acceptable to assume that the concept of ignorability is equivalent to that of MAR.”
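The smoking example can be turned into a small simulation (all the rates below are hypothetical numbers of mine, chosen only to make the mechanism visible): men refuse to answer more often than women, so missingness depends on an observed variable (sex) but not on the missing one, i.e. the data are MAR. The naive complete-case estimate is then biased, while an estimate that conditions on sex recovers the true rate:

```python
import random

random.seed(1)

# Hypothetical population: 50% men; smoking rates 40% (men) and 20% (women),
# so the true overall rate is 30%. Response rates: 50% (men), 90% (women).
n = 20000
rows = []
for _ in range(n):
    male = random.random() < 0.5
    smokes = random.random() < (0.4 if male else 0.2)
    responded = random.random() < (0.5 if male else 0.9)  # MAR: depends on sex only
    rows.append((male, smokes, responded))

responders = [(m, s) for m, s, r in rows if r]
naive = sum(s for _, s in responders) / len(responders)  # complete-case estimate

def rate_among_responders(male_flag):
    grp = [s for m, s in responders if m == male_flag]
    return sum(grp) / len(grp)

p_male = sum(m for m, _, _ in rows) / n
# Estimate within each sex, then weight by the observed sex proportions
stratified = (p_male * rate_among_responders(True)
              + (1 - p_male) * rate_among_responders(False))
```

Because women both smoke less and respond more, the naive estimate is pulled below the true 30%, while the stratified estimate is consistent under MAR. Under MNAR (refusal depending on smoking status itself), no such fix based on observed variables alone would work, which is why that case requires detailed knowledge of the missing mechanism.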
i. “A slave dreams of freedom, a free man dreams of wealth, the wealthy dream of power, and the powerful dream of freedom.” (Andrzej Majewski)
ii. “The tragedy of a thoughtless man is not that he doesn’t think, but that he thinks that he’s thinking.” (-ll-)
iii. “Money is the necessity that frees us from necessity.” (W. H. Auden)
iv. “Young people, who are still uncertain of their identity, often try on a succession of masks in the hope of finding the one which suits them — the one, in fact, which is not a mask.” (-ll-)
v. “The aphorist does not argue or explain, he asserts; and implicit in his assertion is a conviction that he is wiser and more intelligent than his readers.” (-ll-)
vi. “none become at once completely vile.” (William Gifford)
vii. “It is by a wise economy of nature that those who suffer without change, and whom no one can help, become uninteresting. Yet so it may happen that those who need sympathy the most often attract it the least.” (F. H. Bradley)
viii. “He who has imagination without learning has wings but no feet.” (Joseph Joubert)
ix. “It is better to debate a question without settling it than to settle a question without debating it.” (-ll-)
x. “The aim of an argument or discussion should not be victory, but progress.” (-ll-)
xi. “Are you listening to the ones who keep quiet?” (-ll-)
xii. “Writing is closer to thinking than to speaking.” (-ll-)
xiii. “Misery is almost always the result of thinking.” (-ll-)
xiv. “The great inconvenience of new books is that they prevent us from reading the old ones.” (-ll-)
xv. “A good listener is one who helps us overhear ourselves.” (Yahia Lababidi)
xvi. “To suppose, as we all suppose, that we could be rich and not behave as the rich behave, is like supposing that we could drink all day and keep absolutely sober.” (Logan Pearsall Smith)
xvii. “People say that life is the thing, but I prefer reading.” (-ll-)
xviii. “Most men give advice by the bucket, but take it by the grain.” (William Alger)
xix. “There is an instinct that leads a listener to be very sparing of credence when a fact is communicated [...] But give him a fable fresh from the mint of the Mendacity Society [...] and he will not only make affidavit of its truth, but will call any man out who ventures to dispute its authenticity.” (Samuel Blanchard)
xx. “Experience leaves fools as foolish as ever.” (-ll-)
“In a variety of mammals and a few birds, newly immigrated or newly dominant males are known to attack and kill dependent infants [...]. Hrdy (1974) was the first to suggest that this bizarre behaviour was the product of sexual selection: by killing infants they did not sire, these males advanced the timing of the mother’s next oestrus and, owing to their new social position, would have a reasonable probability of siring this female’s next infant. [...] Although this interpretation, and indeed the phenomenon itself, has been hotly debated for decades [...], on balance, this hypothesis provides a far better fit with the observations on primates than any of the alternatives [...] several large-scale studies have estimated that the time gained by the infanticidal male amounts to [25-32] per cent of the mean interbirth interval [...] Because males rarely, if ever, suffer injuries during infanticidal attacks, and because there is no evidence that committing infanticide leads to reduced tenure length, one can safely conclude that, on average, infanticide is an adaptive male strategy. [...] Infanticide often happens when the former dominant male, the most likely sire of most infants even in multi-male groups [...], is eliminated or incapacitated. [...] dominant males are effective protectors of infants as long as they are not ousted or incapacitated.”
“Conceptually, we can distinguish two kinds of mating by females that may reduce the risk of infanticide. First, by mating polyandrously in potentially fertile periods, females can reduce the concentration of paternity in the dominant male, and spread some of it to other males, so that long-term average paternity probabilities will be somewhat below 1 for the dominant male and somewhat above 0 for the subordinates. Second, by mating during periods of non-fertility [...], a female may be able to manipulate the assessment by the various males of their paternity chances, although she obviously cannot change the actual paternity values allocated to the various males. [...] The basic prediction is that females that are vulnerable to infanticide by males should be actively polyandrous whenever potentially infanticidal males are present in the mating pool (i.e. the sexually mature males in the social unit or nearby with which the female can mate, in principle). There is ample evidence that primate females in vulnerable species actively pursue polyandrous matings and that they often engage in matings when fertilisation is unlikely or impossible [...]. Indeed, females often target low-ranking or peripheral males reluctant to mate in the presence of the dominant males, especially during pregnancy. [...] In species vulnerable to infanticide, females often respond to changes in the male cohort of a group with immediate proceptivity, and effectively solicit matings with the new (or newly dominant) male [...] It is in the female’s interest to keep individual males guessing as to the extent to which other males have also mated with her [...] Hence, females should be likely to mate discreetly, especially with subordinate males. [...] We [expect] that matings between females and subordinate males tend to take place out of sight of the dominant male, e.g. at the periphery and away from the group [...] 
it has been noted for several species that matings between females and subordinate males [do] tend to occur rather surreptitiously”
“Even though most primates have concealed ovulations, there is evidence that they use various pre-copulatory mechanisms, such as friendships [...] or increased proximity [...] with favoured males, copulation calls that are likely to attract particular males [...], active solicitation of copulations around the likely conception date [...], as well as changes in chemical signals [...]; unique vocalizations [...]; sexual swellings [...] and increased frequencies of particular behaviour patterns during the peri-ovulatory phase [...] to signal impending ovulation and/or to increase the chances of fertilization by favoured males.” [Recall from the previous post also in this context that which males are actually 'favoured' changes significantly during the cycle].
“Thornhill (1983) suggested that females might exhibit what he called ‘cryptic female choice’ – the differential utilisation of sperm from different males. The term ‘cryptic’ referred to the fact that this choice took place out of sight, inside the female reproductive tract. [...] Cryptic female choice is difficult to demonstrate [as] one has to control for all male effects, such as sperm numbers or differential fertilising ability [...] Cryptic female choice in primates is poorly documented, even though there are theoretical reasons to expect it to be common. [...] The strongest indirect evidence for a mechanism of cryptic female choice in primates is provided by the observation that females of several species of anthropoids (mostly macaques, baboons and chimpanzees) exhibit orgasm [...] Physiological measures during artificially induced orgasms [have] demonstrated the occurrence of the same vaginal and uterine contractions that also characterise human orgasm [...] and are thought to accelerate and facilitate sperm transport towards the cervix and ovaries [...] female orgasm was observed more often in macaque pairs including high-ranking males (Troisi & Carosi, 1998). A comparable effect of male social status on female orgasm rates has also been reported for humans [...]. Orgasm therefore has the potential to be used selectively by females to facilitate fertilisation of their eggs by particular males [...] This hypothesis is indirectly supported by the observation that female orgasm apparently does not occur among prosimians [...], but rather among Old World primates, where the potential for coercive matings by multiple males is highest [...]. Seen this way, female primate orgasm may therefore represent an evolutionary response to male sexual coercion that provided females with an edge in the dynamic competition over the control of fertilisation” [Miller's account/explanation was quite different. I think both explanations are rather speculative at this point.
Speculative, but interesting.]
“It has long been an established fact in ethology that interactions with social partners influence an individual’s motivational state and vice versa, and, through interactions, its physiological development and condition. For example, the suppression of reproductive processes by the presence of a same-sex conspecific has been documented for many species, including primates. [...] The existence of a conditional [male mating] strategy with different tactics has been demonstrated in several species of mammals. To mention but one clear example: in savannah baboons, a male may decide what tactic to follow in its relationships with females after assessing what others do. Smuts (1985) has shown that dominant males follow a sexual tactic in which they monopolise access to fertile females by contest competition. A subordinate male may use another tactic. He may persuade a female to choose him for mating by rendering services to the female (e.g. protecting her in between-female competition) and thus forming a ‘friendship’ with the female. Similar variation in tactics has been found in other primates (e.g. in rhesus macaques, Berard et al., 1994).”
…And there you probably have at least part of the explanation for why millions of romantically frustrated (…’pathetic’?) human males waste significant parts of their (reproductive) lives catering to the needs of women who already have a sexual partner and are not sexually interested in them. They might not even have been born were it not for the successful application of this type of sit-and-wait strategy on the part of some of their ancestors.
The chapter in question has a lot of stuff about male orangutans, and although it’s quite interesting I won’t go much into the details here. I should note however that I think most females will probably prefer the above-mentioned ‘sneaky’ male tactic to the mating tactic of unflanged orangutans, which basically amounts to walking around looking for a female unprotected by a flanged male and then raping her when one is found. (In terms of the ‘sneakiness’ of mating strategies, females do pretty well for themselves as well; indeed in the specific setting it’s not unlikely that it’s actually the females who initiate in a substantial number of cases – see above.) In one sample included in the book of orangutan matings taking place in Tanjung Puting national park (Indonesia), only 1 or 2 of the roughly 20 recorded matings by unflanged males (it’s a bar graph) did not involve the female resisting. These guys are great, and apparently really sexy to the opposite sex… The ratio of resisting to non-resisting females in the case of the matings involving flanged males was pretty much the reverse: a couple of rapes and ~18–19 unforced mating events. It should be noted that the numbers of matings achieved by flanged and unflanged males are roughly similar, so judging from these data approximately half of all matings these female orangutans experience during their lives are forced.
“Especially in long-lived organisms such as primates, a male’s success in competing for mates and protecting his offspring should be affected by the nature of major social decisions, such as whether and when to transfer to other groups or to challenge dominants. Several studies indicate dependence of male decisions about transfer and acquisition of rank on age and local demography [...]. Likewise, our work on male long-tailed macaques [...] indicated a remarkably tight fit between the behavioural decisions of males and expectations based on known determinants of success [...], suggesting that natural selection has endowed males with rules that, on average, produce optimal life-history trajectories (or careers) for a given set of conditions. [...] Most non-human primates live in groups with continuous male-female association ["Only a minority of about 10 per cent of primate species live in pairs" - from a previous chapter], in which group membership of reproductively active (usually non-natal) males can last many years. For a male living in such a mixed-sex group, dominance rank reflects his relative power in excluding others from resources. However, the impact of dominance on mating success is variable [...] Although rank acquisition is usually considered separately from transfer behaviour and mating success, the hypothesis examined here is that they are interdependent [...]. We predict that the degree of paternity concentration in the dominant male, determined by his ability to exclude other males from mating, determines the relative benefits of various modes of acquisition of top rank [...], and that these together determine patterns of male transfer”
“the cost of inbreeding may cause females to avoid mating with male relatives [...]. This tendency has been invoked to explain an apparent female preference for novel (recently immigrated) males”
“a male can attain top rank in a mixed-sex group in three different ways. First, he can defeat the current dominant male during an aggressive challenge [...] Second, he can attain top rank during the formation of a new group[...] A third way to achieve top rank is by default, or through ‘succession’, after the departure or death of the previous top-ranking male, not preceded by challenges from other males”
The chapter which included the above quotes is quite interesting, but in a way it is also difficult to quote from, given the way it is written. The authors discuss multiple variables which may affect how likely a male is to leave the group in which he was born (for example, if there are fewer females in the group, all else equal he’s more likely to leave); which mechanism he’s likely to employ in order to try to achieve top rank in his group, if that’s indeed an option; and when he’s likely to act. In small groups males always fight for the top spot, and the dominant male takes a very dim view of other mature males trying to encroach upon his territory, whereas in large groups the dominant male is more tolerant of competitors and conflicts are much less likely to be settled by fighting – probably because in a large group the dominant male is in general unable to monopolize access to the females, so a male to some extent ‘gains less’ by achieving alpha status. As for timing, a young male is stronger than an old male and can also expect to maintain his tenure as the top male for a longer period of time, so males who try to achieve top rank by fighting for it tend to be young, whereas males who achieve top rank by other means tend to be older. Whether or not females reproduce in a seasonal pattern also matters. It’s obvious from the data that it’s far from random how, and at which point during their lives, males make their transfer decisions, and how they settle conflicts about who should get the top spot. The approach in that chapter reminded me a bit of optimal foraging theory, though the authors didn’t mention that framework at all. Here’s what they concluded from the data they presented in the chapter:
“We found not only variation between species but also remarkable variation within species, or even populations, in the effect of group size on paternity concentration and thus transfer decisions, as well as mode of rank acquisition and likelihood of natal transfer. This variability suggests that a primate male’s behaviour is guided by a set of conditional rules that allow him to respond to a variety of local situations. [...] Primate males appear to have a set of conditional rules that allow them to respond flexibly to variation in the potential for paternity concentration. Before mounting a challenge, they assess the situation in their current group, and before making their transfer decisions they monitor the situation in multiple potential-target groups, where this is possible.”
A friend pointed me to a Danish article talking about this. I pointed out a few problems and reasons to be skeptical to my friend, and I figured I might as well share a few thoughts on these matters here as well. I do not have access to my library at the moment, so this post will be less well sourced than most posts I’ve written on related topics in the past.
i. I’ve had diabetes for over 25 years. A cure for type 1 diabetes has been just around the corner for decades. This is not a great argument for assuming that a cure will not be developed in a few years’ time, but you do at some point become a bit skeptical.
ii. The type of ‘mouse diabetes’ people use in research on animal models such as NOD mice, from which many such ‘breakthroughs’ are derived, is different from ‘human diabetes’. As pointed out in the reddit thread, “Doug’s group alone has cured diabetes in mice nearly a dozen times”. This may or may not be true, but I’m pretty sure that at the present point in time my probability of being cured of diabetes would be significantly higher if I happened to be one of those lab mice.
iii. A major related point often overlooked in contexts like these is that type 1 diabetes is not one disease – it is a group of different disorders all sharing the feature that the disease process involved leads to destruction of the pancreatic beta-cells. At least this is not a bad way to think about it. This potentially important neglected heterogeneity is worth mentioning when we’re talking about cures. To talk about ‘type 1 diabetes’ as if it’s just one disease is a gross simplification, as multiple different, if similar, disease processes are at work in different patients; some people with ‘the disease’ get sick in days or weeks, in others it takes years to get to the point where symptoms develop. Multiple different gene complexes are involved. Prognosis – both regarding the risk of diabetes-related organ damage and the risk of developing ‘other’ autoimmune conditions (‘other’ because it may be the same disease process causing the ‘other’ diseases as well), such as Hashimoto’s thyroiditis – depends to some extent on the mutations involved. This stuff relates also to the question of what we mean by the word ‘cure’ – more on this below. You might argue that although diabetics are different from each other and vary in a lot of ways, the same thing could be said about the sufferers of all kinds of other diseases, such as, say, prostate cancer. So maybe heterogeneity within this particular patient population is not that important. But the point remains that we don’t treat all prostate cancer patients the same way, and that some are much easier to cure than others.
iv. The distinction between types (type 1, type 2) makes it easy to overlook the fact that there are significant within-group heterogeneities, as mentioned above. But the complexity of the processes involved is perhaps even better illustrated by pointing out that even between-group distinctions can sometimes be quite complicated. The distinction between type 1 and type 2 diabetes is a case in point; usually people say only type 1 is autoimmune, but it was made clear in Sperling et al.’s textbook that that’s not really true; in a minority of type 2 diabetics autoimmune processes are also clearly involved – and this is highly relevant, as these subgroups of patients have a much worse prognosis than type 2 diabetics without autoantibody markers: they’ll on average progress to insulin-dependent disease (uncontrollable by e.g. insulin-sensitizers) much faster than people without an autoimmune disease process. In my experience most people who talk about diabetes online, including well-informed people in e.g. reddit/askscience threads, are not (even?) aware of this. I mention it because it’s one obvious example of how hidden within-group heterogeneities can have huge relevance for which treatment modalities are desirable or useful. You’d expect type 2’s with autoimmune involvement to need a different sort of ‘cure’ than ‘ordinary’ type 2’s. For a little more on different ‘varieties’ of diabetes, see also this and this.
There are, as already mentioned, also big differences in outcomes between subgroups within the type 1 group; some people with type 1 diabetes will end up with three or four ‘different’(?) autoimmune diseases, whereas others will get lucky and ‘only’ ever get type 1 diabetes. Not only that, we also know that differences in glycemic control between those groups do not account for all the between-group variation in outcomes in terms of diabetes-related complications; type 1 diabetics hit by ‘other’ autoimmune processes (e.g. Graves’ disease) tend to be more likely to develop complications to their diabetes than the rest, regardless of glycemic control. Would successful beta-cell transplants (assuming these at some point become feasible) and the resulting euglycemia in that patient population still prevent thyroid failure later on? Would the people more severely affected, e.g. people with multiple autoimmune conditions, still develop some of the diabetes-related complications, such as cardiovascular complications, even if they had functional beta cells and were to achieve euglycemia, because those problems may be caused by disease aspects like accelerated atherosclerosis which are perhaps to some extent unrelated to glycemic control? These are things we really don’t know. It’s very important in that context to note that most diabetics, both type 1 and type 2, die from cardiovascular disease, and that the link between glycemic control and cardiovascular outcomes is much weaker than the one between glycemic control and microvascular complications (e.g., eye disease, kidney disease). There may be reasons why we do not yet have a good picture of just how important euglycemia really is, e.g. because glucose variability and not just average glucose levels may be important in terms of outcomes (I recall seeing this emphasized recently in a paper, but I’m not going to look for a source) – and HbA1c only captures the latter.
So maybe it does all come back to glycemic control, and it’s just that we don’t have the full picture yet. Maybe. But to the extent that e.g. cardiovascular outcomes – or other complications in diabetics – are unrelated to glycemic control, beta-cell transplants may not improve cardiovascular outcomes at all. One potential ‘cure’ might be one where diabetics get beta-cell transplants, achieve euglycemia and are able to drop the insulin injections – yet still die too soon from heart disease because other aspects of the disease process have not been addressed by the ‘cure’. I don’t think we currently know enough about these diseases to judge whether a hypothetical diabetic with functional transplanted beta cells might not still, to some extent, be ‘sick’.
v. If your cure requires active suppression of the immune system, not much will really be gained. A fact that may surprise some people is that we already know how to do ‘curative’ pancreas transplants in diabetics, and these are sometimes done in diabetic patients with kidney failure (“In most cases, pancreas transplantation is performed on individuals with type 1 diabetes with end-stage renal disease, brittle diabetes [poor glycemic control, US] and hypoglycaemia unawareness. The majority of pancreas transplantation (>90%) are simultaneous pancreas-kidney transplantation.” – link). These people would usually be dead without a kidney transplant, and as they already have to suffer through all the negative transplant-related effects of immune suppression and so on, the idea is that you might as well switch both defective organs while you’re at it, if they’re both available. But immune suppression sucks, and these patients do not have great prognoses, so this is not a good way to deal with diabetes in a ‘healthy diabetic’; if rejection problems are not addressed in a much better manner than is currently possible in whole-organ-transplant cases, the attractiveness of any such intervention/‘cure’ goes down a lot. In the study they tried to engineer their way around this issue, but whether they’ve been successful in any meaningful way is subject to discussion – I share ‘SirT6’s skepticism at the original reddit link. I’d have to see something like this working in humans for some years before I get too optimistic.
vi. One final aspect is perhaps worth noting. Even a Complete and Ideal Cure involving beta-cell transplants, in a setting where it turns out that everything that goes wrong with all diabetics really is blood-glucose related, is not going to repair the damage that’s already been done. This aspect will of course matter much more to some people than to others.
Okay, here’s the short version: This book is awesome – I gave it five stars and added it to my list of favourites on goodreads.
It’s the second primatology text I’ve read this year – the first one was Aureli et al.; my coverage of that book can be found here, here and here. I’ve also recently read a few other texts which have touched upon arguably semi-related themes; books such as Herrera et al., Gurney and Nisbet, Whitmore and Whitmore, Okasha, Miller, and Bobbi Low. Some of the stuff covered in Holmes et al. turned out to be relevant as well. I mention these books because this book is aimed at graduates in the field (“Sexual Selection in Primates is aimed at graduates and researchers in primatology, animal behaviour, evolutionary biology and comparative psychology“), and although my background is different I have, as indicated, read some stuff about these kinds of things before – if you know nothing about this stuff, it may be a bit more work for you to read the book than it was for me. I still think you should read it though, as this is the sort of book everybody should read. If they did, people’s opinions about extra-marital sex might change; their understanding of the behavioural strategies people employ when they go about being unfaithful might increase; single moms would find it easier to understand why their dating value is lower than that of their competitors without children; and new dimensions of friendship dynamics – both those involving same-sex individuals and those involving individuals of both sexes – might enter people’s mental models and provide additional angles to help explain why they, or other people, behave the way they do. To take a few examples.
Most humans are probably aware that many males in primate species quite closely related to us habitually engage in activities like baby-killing or rape, and that they do this because such behavioural strategies lead to them being more successful in the fitness context. However they may not be aware that females of those species have implemented behavioural strategies in order to counteract these behaviours; for example females may furtively sleep around with different males in order to confuse the males about who’s the real father of their offspring (you don’t want to kill your own baby), or they may band up with other females, and/or perhaps a strong male, in order to obtain protection from the potential rapists. I mention this in part because a related observation is that it should be clear from observing humans in their natural habitat that most human males are not baby-killers or rapists, and such an observation might easily lead people who have some passing familiarity with the field to think that a lot of the stuff included in a book like this one is irrelevant to human behaviour; a single mom is unlikely to hook up with a guy who kills her infant, so this kind of stuff is probably irrelevant to humans – we are different. I think this is the wrong conclusion to draw. What’s particularly important to note in this context is that counterstrategies are reasonably effective in many primate species, meaning for example that although infanticide does take place in wild primate species, it doesn’t happen that often; we’ve in some respects come a bit further than other species in terms of limiting such behaviours, but in more than a few areas of social behaviour humans actually seem to act in a rather similar manner to those baby-killing rapists and their victims. 
It’s also really important to observe that sexual conflict is but one of several types of conflict which organisms such as mammals face, and that the dynamics of such conflicts, and aspects like how they are resolved, have many cross-species similarities – see Aureli et al. for an overview. It’s difficult and expensive to observe primates in the wild, but when you do, it’s not actually that hard to spot many precursors of, or animal equivalents of, various behaviours that humans engage in as well. Some animals are more like us than people like to think, and the common idea that humans are really special and unique on account of our large brains may to some extent be the result of a lack of knowledge about how animals actually behave. Yep, we are different, but perhaps not quite as different as people like to think. Some of the behaviours we like to think of as somehow ‘irreducible’ probably aren’t.
Observations included in a book like this one may well change how you think about many things humans do, at least a little. Humans who are not sexually active have the same evolutionary past as those who are, which means that their behaviours are likely to have been shaped by similar mechanisms – an important point being that if even someone like me, who at the moment considers it a likely outcome that I’ll never have sex during my lifetime, is capable of finding stuff covered in a book such as this one to be relevant and useful, there are probably very few people who wouldn’t find some of the stuff in there relevant and useful to some extent. Modern humans face different decision variables and constraints than our ancestors did, but the brains we’re walking around with are to a significant extent best thought of as the brains of our ancestors – they really haven’t changed that much in, say, the last 100,000 years, and some parts of the ‘code’ we walk around with are literally millions of years old. You need to remember to account for stuff like birth control, ‘culture’ and institutions when you’re dealing with human sexual behaviours today, but a lot of other stuff should be included as well, and books like this one will give you another piece of the puzzle. An important piece, I think.
Although there’s little mathematics in this book (mostly an infanticide model in chapter 8), as you can imagine given the target audience the book is really quite dense. There’s way too much good stuff in it for me to cover all of it here, and I don’t know at this point how detailed my coverage of the book will end up being. A lot of details will be left out regardless of how many posts I decide to give this book – more than a few chapters are of such high quality that I could easily devote an entire post to each of them. If the stuff I include in my posts sparks your interest, you’ll probably want to read the rest of the book as well.
“In this review I have emphasised five points that modern students of sexual selection ought to keep in mind. First, the list of mechanisms of sexual selection is longer than just the two most famous examples of male-male combat and female choice. Male mate choice and female-female competition are two frequently noted possibilities. Other between-sex social interactions that can result in sexual selection include male coercion of females [...] and female resistance to male coercion or manipulation [...] sexual selection among females should be as important as male sexual selection to dynamical interactions between the sexes. Sexual selection among females will favour resistance to male attempts to manipulate and control them [...] Second, even when a mechanism of intersexual selection depends on interactions between members of opposite sexes, the important thing for selection is the variance in reproductive success among members of one sex. Think about female mate choice for a moment. Whenever choosers discriminate, mate choice may cause variation among the chosen in mating and reproductive success [...] Thus, mate choice is a mechanism of sexual selection because it theoretically results in variance among individuals of the chosen sex in mating success and perhaps other components of fitness. [...] Third, sexual selection can result in individual tradeoffs among the components of fitness [...] Fourth, for a trait to be under selection, there must be variation in the trait. For sexual selection to operate the trait variation must be among individuals of the same sex. [...] To argue that an opportunity for sexual selection exists, variation among same-sex individuals in reproductive success must exist. Fifth, between-sex variances in reproductive success alone are [...] an insufficient basis for the conclusion that sexual selection operates [...], as within-sex variances may arise because of random, non-heritable factors”
“In summary, sex roles fixed by past selection from anisogamy or from parental investment patterns so that females are choosy and males indiscriminate are currently questionable for many species. The factors that determine whether individuals are choosy or indiscriminate seem relatively under-investigated.” (One factor which does seem to be important is the encounter frequency with potentially mating opposite-sex individuals; this variable (how often do you meet a potential partner?) has been shown to affect the sexual behaviours of individuals in species as diverse as fruit flies, fish and butterflies).
“Because most primates live in stable, long-lasting social groups, pressures for direct sexually selected communication cues may be less than in species with ephemeral mating groups or frequent pairings. Primates are likely to accumulate information about competitors and mates from many sources over a longer time frame. [...] Although there do appear to be some communication signals that may be sexually selected, it may be best to consider these signals as biasing factors rather than the determinants of mate choice. For primates, human and non-human, as well as for Japanese quails, gerbils, rats and blue guramis, there is more to successful reproduction than simply responding to a sexually selected cue. Although I might be initially attracted to a woman with the ‘correct’ breast-to-waist-to-hip ratios, a symmetric face and all of the other hypothesised sexually selected cues, I will quickly learn if she is intelligent or not, if she is emotionally stable, and many other things that should be more important in my reproductive decisions than mere appearance. It is important to keep this in mind in any discussion of sexual selection. [...] The strongest evidence, so far, for intersexual selection of traits is observed in female primates, suggesting that male mate choice and female competition may be as important as male competition and female mate choice. [...] The data suggest that intersexual selection is as strong if not stronger on female primates than on males.” [As should be very clear at this point, male primates do have standards, despite what the third cartoon at the beginning of this post would have you believe...]
“One form of polyandry that has received much attention is extra-pair copulation (EPC) – sex that a female with a social mate has with a male who is not the social mate. [...] Because an evolved adaption is a product of past direct selection for a function, the question of whether EPC by women is currently adaptive or currently advances women’s reproductive success (RS) is a distinct one. An evolved adaption may be currently non-adaptive and even maladaptive because the current ecological setting in which it occurs is different from the evolutionary historical setting that was the selection favouring it [...] Female EPC is not a rare occurrence in humans. [...] Female EPC may be a relatively common occurrence now. But was it sufficiently common in small ancestral populations of humans or pre-hominid primates to be an effective selective force of evolution? Evidence suggests yes, and perhaps the best evidence comes from design features of men rather than women. Men, but not women, can be duped about parentage as a result of EPC, leading to the unknowing investment in another man’s offspring. Men show a rich diversity of mate guarding and anti-cuckoldry tactics ranging from sexual jealousy, vigilance, monopolising a mate’s time, pampering a mate, threatening a mate with harm if she shows interest in other men, and adjusting ejaculate size to defend against the mate’s insemination by a competitor [...] Some mate guarding tactics appear to be conditional, such that men guard mates of high fertility status (young or not pregnant) more intensely than ones of low-fertility status (older or pregnant) [...] and hence appear not to be caused by general male-male competitive strivings but rather concern for fidelity of a primary social mate [...] We [...] asked women in [a] study to report their primary mate’s mate-retention tactics. Our questionnaire measures two major dimensions, ‘proprietariness’ and ‘attentiveness’. Women reported their partners to be higher on both when fertile [i.e., mid-cycle].”
“Women’s preferences shift across the [menstrual] cycle in a number of ways. They particularly prefer the scent and faces of more symmetrical men when fertile. The face they find most attractive when fertile is more masculine than the face they most prefer when not fertile. They prefer more assertive, intrasexually competitive displays when fertile than when not. [An example: “The behaviours of men being interviewed by women for a lunch date were coded for a host of verbal and non-verbal qualities [by Gangestad et al.]. Through principal components analysis of these codes, two major dimensions along which men’s performance varied were identified; ‘social presence’, marked by a man’s composure, his direct eye contact and lack of downward gaze, as well as a lack of self-deprecation, and emphasis that he’s a ‘nice guy'; and ‘direct intrasexual competitiveness’, marked by a man’s explicit derogation of his competitor and statements to the effect that he is the better choice, as well as not being obviously agreeable.”] Furthermore, evidence indicates that their preferences when evaluating men as sex partners (i.e. their sexiness) is particularly affected; evidence shows that their evaluations of men as long-term partners shift little, if at all. [...] symmetrical men appear to invest less time in and are less faithful to their primary relationship partners [...] [The] pattern of findings suggests that it is not simply the case that all traits preferred by females are particularly preferred mid-cycle; that fertility status simply enhances existing preferences. Rather, it appears that only specific preferences are enhanced – perhaps those for features that ancestrally were indicators of genetic benefits. Preferences for features particularly important in long-term investing mates may actually be more prominent outside the fertile period.”
“STDs typically have been viewed as a curious group of parasites rather than established entities with important selective effects on their hosts [...]. In recent decades, this view has changed, primarily through our increased understanding of HIV [...] [There are] at least three major costs of STDs: (1) A large proportion of STDs increase the risk of sterility in males and females. (2) STDs commonly exhibit vertical transmission, with severe consequences for offspring health [see also this - Holmes et al. covers this stuff in some detail and actually the authors refer to an older version of that book in this context]. (3) Relative to infectious disease transmitted by non-sexual contact, STDs commonly exhibit long infectious periods with low host recovery, failure to clear infectious organisms following recovery, or limited immunity to reinfection. [...] Many negative consequences of STD infection probably provide benefits to the parasites themselves, increasing the likelihood of invasion, transmission and persistence [...] In mammals, for example, host infertility is likely to result in repeated cycling by females and may consequently increase their number of sexual contacts. [Mind blown! I'd never even thought about this.] Primates offer an important opportunity to test this hypothesis, because the frequency of infertile females within wild groups may exceed 10 per cent [...]. Similarly, STDs that increase host mortality or possess short infectious periods are less likely to survive until the next breeding season, when contact is established with new, uninfected hosts [...] Thus, in addition to long infectious periods, STDs tend to produce less disease-induced mortality relative to other infectious diseases”
“Because sexual reproduction offers an important mechanism for disease spread and may even be influenced by infection status, it is pertinent to ask whether animals can identify infected individuals and avoid mating with them. Symptoms such as visible lesions, sores, discharge around the genitalia or olfactory cues may provide evidence of infection. [...] many human STDs are [...] characterized by limited symptoms or, in the case of viruses, asymptomatic shedding [...] reproductive success of an STD is correlated with partner exchange and successful matings of infected hosts. Therefore, virulent parasites that produce outward signs of infection will experience decreased transmission because they provide conspicuous cues for choosy members of the opposite sex to avoid infected mates. [...] A parasite faces two main barriers, or defences, imposed by the host: behavioural counter-strategies to avoid exposure, and physical or immune defences [...]. The order of events can vary, but behavioural mechanisms commonly are viewed as the first line of defence. An important point we wish to emphasise is that host behaviour to avoid exposure prior to mating is likely to have other reproductive costs, and these costs may outweigh their benefits. [...] male and female behaviour indicates that STD risk is of secondary importance relative to other selective pressures operating on mating success. Females mate polyandrously to reduce infanticide risk [...] and, for similar reasons, they prefer novel males, though risking infection with STDs acquired from other social groups. Males prefer females of intermediate age that have already produced offspring, as these females have high reproductive value [...]. Both sets of decisions by males and females are expected to increase exposure to STDs by increasing the number of partners and mating events.”
I read the rest of the book, but I didn’t particularly like the last part either – I gave the book one star on goodreads. A couple of the last chapters were sort of okay, but they were not good enough for me to change my mind about the book in general.
“Whether the child who is suspected as being on the autism spectrum is “high” or “low” functioning, it is important to ascertain a sense of their global functioning particularly in the area of ADLs [activities of daily living]. Many professionals erroneously assume a “high-functioning” individual with, perhaps, Asperger syndrome functions in the community at an age appropriate level. Professionals are often beguiled by “Aspies” vocabularies and intelligence. They may think it is unnecessary to conduct an assessment of the person’s adaptive functioning because his or her IQ is in the normal to superior range. However, because of impairments in social communication, executive functioning, and the ability to read facial expressions and non-literal communication, individuals with Asperger syndrome can have extreme difficulty functioning on a daily basis. [...] many are diagnosed in adulthood [...]. For a community-based sample in Canada, 48 % of individuals with “high-functioning autism” and AS were not diagnosed until they were 21 years or older (Stoddart et al., 2013). [...] In higher-functioning individuals with average to above-average IQs, there is often a “cloak of competence” [...] or an assumption of competence when it comes to [...] basic life skills. Adults who are highly accomplished in their work may have basic problems with organization at home, or for tasks which are of less interest [...] outcome is variable, as some individuals with very high IQ face significant challenges in adulthood. [...] Depressed mood may be particularly common in high functioning adults, who have insight into their social and adaptive difficulties, and who may desire to make changes but have limited success in doing so”
“higher functioning individuals who have a considerable vocabulary may still have a [speech] disability. Individuals with Asperger syndrome for example have an impressive vocabulary even at a young age, but they have impairments in the semantics and pragmatics of speech. They may also have issues with prosody that need to be assessed and addressed.”
“Understanding which variables predict adult outcomes in ASD is a crucial goal for the field, but we know little about what variables predict different outcomes. [...] Longitudinal studies of ASD from childhood to adulthood have consistently yielded only two useful prognostic factors for adult outcome in ASD. A childhood IQ score in the near-average or average ranges (i.e., ≥70) and communicative phrase speech before 6 appear necessary but insufficient for a person to access a moderate level of independence in adulthood [...] Individuals having these childhood characteristics have widely varying adult outcomes, so exhibiting these characteristics in childhood is no guarantee that a person will achieve adult independence. The predictive utility of other childhood variables has been examined. Findings regarding severity ratings of childhood ASD symptoms have been mixed”
“Almost all studies that have examined developmental trajectories for individuals with ASDs show that these individuals exhibit reductions in autistic symptoms over time [...] Symptoms of ASD tend to diminish both in severity and number. The most improvement has usually been recorded for participants with IQ scores in the normal range and the least severe symptom presentations at their initial evaluation [...] Published reports also indicate that there are subgroups of individuals with ASD who experience marked change in the course of their development at some point, either as deterioration or dramatic improvement. [...] This phenomenon was [...] noted in a Japanese sample of 201 young adults, although marked improvement was also described [...]. Roughly one-third of this sample experienced a marked deterioration in behavior, most often occurring after age 10. The change occurred after age 20 in six cases. Declines were characterized by specific skill regressions or by increases in hyperactivity, aggression, destructiveness, obsessive behavior, or stereotyped behaviors. Notable improvements in the developmental course occurred in 43 % of the sample [...] Improvements occurred between the ages of 10 and 15 years for most participants. No predictable antecedents to changes in the developmental course have been noted in previous studies. [...] Significant improvements in verbal communication abilities have been reported on the ADI-R, although findings related to nonverbal communication have been mixed.”
“While not a core diagnostic domain at this time, difficulties processing sensory information are common in people with ASD and considered an associated feature [...]. In a study of sensory processing difficulties in 18 adults with ASD and normal-range IQ scores aged 18–65, Crane, Goddard, and Pring (2009) found that, compared to matched controls, adults with ASD reported more experiences with low registration (i.e., responding slowly or not noticing sensory stimuli), sensitivity to sensory input, and avoiding sensations. All but one person reported extreme scores in at least one area.”
“Outcome classifications usually include five nodes ranging from Very Poor (i.e., the person cannot function independently in any way) to Very Good (i.e., achieving great independence; having friends and a job). There is considerable variation in outcomes for samples studied, but in general outcome for approximately 60 % of individuals with ASD is considered Fair, Poor, or Very Poor [...] Most outcome studies indicate that few adults with ASD develop significant relationships outside of their families of origin. [...] It is [however] likely that the majority of adults with ASD who also have normal range intellectual abilities have not been diagnosed, and many of these individuals may have married or developed other close relationships outside of their families of origin. [...] In terms of outcome studies to date, very few adults with ASD have been reported to have successful, long-term romantic relationships [...]. Some outcome studies indicate that no participants or only one participant has been involved in a romantic relationship [...]. One-third to half of adults in outcome studies have friendships outside of their families [...]. Almost 75 % of family members reporting on the sample described by Eaves and Ho (2008) indicated that they enjoyed good to excellent relationships with their relative with ASD. Similar results have been found in other studies of adults with A[S]D [...]. Females have reportedly experienced greater success with peer relationships than males [...]. Between 10 and 30 % of adults in recent studies (Eaves & Ho, 2008; Engstrom et al., 2003; Farley et al., 2009) had experience in a romantic relationship. [...] Roughly one-third of adults with normal-range IQ scores in outcome studies are employed, inclusive of regular, full-time work, part-time or volunteer work, supported employment, and sheltered employment. [...] Roughly 40 % of participants in outcome studies have been prescribed medications for psychiatric conditions or to control behavior”
“Ganz (2007) estimated the societal costs of ASD across the lifespan, calculating a total per capita societal cost for an individual with ASD at over $3 million. The most expensive period was early childhood, at which time many children are undergoing diagnostic studies, receiving medical treatments, and participating in intensive intervention programs. Costs for young adults (ages 23 through 27) were estimated to be $404,260 (in 2003 dollars). Ganz calculated direct medical, direct nonmedical, and indirect costs. Direct medical costs are expenses incurred in the course of medical care, and these tended to be lower for adults than other age groups. Direct nonmedical costs include adult support services and employment services, as examples. This life period often involves much trial-and-error while families attempt to identify services that will result in a good fit with their son or daughter. Direct nonmedical costs were higher in this age group than any other, estimated at $27,539 over this 5-year period. Indirect costs mainly comprise lost productivity costs, as when family members must leave work or reduce hours in order to support their family member. These costs are also the highest for this age group because adult children may remain dependent on their parents, but are also technically old enough to enter the workforce. Therefore, lost productivity costs are calculated for parents as well as the young adult, a phenomenon unique to this life period. Societal costs diminish as adults with ASD age, so that someone aged 48 through 52 incurs less than half the cost of someone aged 23 through 27.”
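The quoted cost figures invite a bit of back-of-the-envelope arithmetic. The dollar amounts below are the ones reportedly calculated by Ganz (2007) (in 2003 dollars); the per-year average and the nonmedical share are my own illustrative calculations, not figures from the book:

```python
# Illustrative arithmetic only: the dollar figures are the ones quoted from
# Ganz (2007) in the text (2003 dollars); the derived numbers are my own
# back-of-the-envelope calculations, not numbers from the book.
young_adult_total = 404_260   # total societal cost, ages 23 through 27
direct_nonmedical = 27_539    # direct nonmedical cost over the same 5 years
years = 5

avg_per_year = young_adult_total / years
print(f"Average societal cost per year (ages 23-27): ${avg_per_year:,.0f}")

# Direct nonmedical costs are a small share of the total for this age group:
share = direct_nonmedical / young_adult_total
print(f"Direct nonmedical share of total: {share:.1%}")
```

So $404,260 over five years works out to roughly $80,000 per year for ages 23 through 27, with the direct nonmedical component making up under 7 % of the total.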
“Virtually nothing is known about ASDs in late life. Questions at the forefront about this period are related to brain changes, the transfer of care from parents to other family members or human service agencies as parents become unable to care for their adult children through illness or death, and the adequacy of existing services to care for this population. There are many questions about the nature of changes in the brain in adults [with] ASD as they approach old age. We do not know whether or not they experience memory problems at a similar rate to adults in [the] general population, nor whether they may experience earlier onset of dementia and increased rates of dementia”
Chapter 15 was actually sort of okay; it deals with a community survey conducted in Britain, in which researchers tried to figure out how many undiagnosed autistics are out there in the community, and how well those individuals are actually doing compared to other people. Instead of going into the details of that chapter I’ll just point you to the original paper described in the text. The paper behind chapter 16, which I’ll talk a little bit about below, is available here.
“While evidence is accumulating regarding the benefits of psychosocial interventions for adults with ASD, there have been no systematic reviews or meta-analyses conducted to summarize the cumulative evidence base for these approaches. Therefore, we conducted a systematic review to examine the evidence base of psychosocial interventions for adults with ASD [...] An extensive literature search was conducted in order to locate published studies documenting interventions for adults with ASD [...] These searches revealed 1,217 published reports. Additionally, references of relevant studies were examined for additional studies to be included in this research. [...] From these abstract searches, studies were then examined and included in this review if they (1) were conducted using a single case study, noncontrolled trial, non-randomized controlled trial, or RCT design that reported pretest and post-test data, (2) reported quantitative findings, (3) included participants ages 18 and older, and (4) included participants with ASD. In total, 13 studies assessing psychosocial interventions for adults with ASD were found. [...] The included studies were diverse in their methodologies and represented numerous categories of interventions. A total of five were single case studies, four were RCTs, three were non-randomized controlled trials, and one was an uncontrolled pre–post trial. Six studies evaluated the efficacy of social cognition training, five studies evaluated the efficacy of applied behavior analysis (ABA) techniques, and two studies evaluated the efficacy of other types of community-based interventions. [...] All of the included ABA studies were single case studies. [...] All ABA studies reported positive benefits of treatment, although the maintenance of this benefit varied between studies. Effect size was not reported for the ABA studies, as findings were based on a single subject. [...] 
As a whole, the studies identified had modest sample sizes, with the greatest including 71 participants and over three-quarters of studies having less than 20 participants.”
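As a quick sanity check on the review’s tallies as quoted above: the breakdown by study design and the breakdown by intervention category should each sum to the 13 included studies, and they do. A minimal sketch:

```python
# Tallies as quoted from the systematic review: both breakdowns of the
# included studies should sum to the 13 studies found.
designs = {
    "single case study": 5,
    "RCT": 4,
    "non-randomized controlled trial": 3,
    "uncontrolled pre-post trial": 1,
}
categories = {
    "social cognition training": 6,
    "applied behavior analysis (ABA)": 5,
    "community-based interventions": 2,
}

assert sum(designs.values()) == 13
assert sum(categories.values()) == 13
print("Both tallies sum to the 13 included studies.")
```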
“there are significant limitations to the current evidence base. While we conducted an extensive search of the literature available on psychosocial interventions for adults with ASD since 1950, only 13 studies were found. Due to the small number of studies, we were unable to conduct a meta-analysis of the adult ASD literature. As a consequence, clear estimates of effect size for different types of psychosocial interventions are not available. Effect sizes should also be interpreted with caution, especially for studies with small sample sizes, which comprised the majority of studies. The incongruent nature of outcome measures used in some of the included studies also indicate[s] that the reader should take caution before generalizing the results of included studies. For instance, García-Villamisar & Hughes (2007) used cognitive functioning outcomes, such as the Stockings of Cambridge and Big Circle/Little Circle tasks, to measure the effectiveness of a supported employment program but did not report outcome data on the number of adults with ASD who were employed as a result of the program.”
Some general comments about the book can be found in my first post about it. I don’t have a lot of other things to add, but I do want to cover some more stuff from the book. Below are some observations from the chapters about human reproduction, as well as some stuff on energy homeostasis.
“The ovum and sperm pronuclei fuse to form the zygote, which [...] has the normal diploid chromosomal number [...]. The zygote divides mitotically as it travels along the uterine tube, and at about 3 days after fertilization enters the uterus, when it is now a morula. The cells of the morula continue to divide to form a hollow sphere, the early blastocyst, consisting of a single layer of trophoblast cells and the embryoblast, an inner core of cells which will form the embryo. The trophoblast, after implantation, will form the vascular interface with the maternal circulation. After around 2 days in the uterus, the blastocyst is accepted by the endometrial epithelium under the influence of estrogens, progesterone and other endometrial factors. This embedding or implantation process triggers the ‘decidual response’, involving an expansion of a space, the decidua, to accommodate the embryo as it grows. The invasive trophoblast proliferates into a protoplasmic cell mass called a syncytiotrophoblast, which will eventually form the uteroplacental circulation. By about 10 days, the embryo is completely embedded in the endometrium.
If the ovum is fertilized and becomes implanted, the corpus luteum does not regress, but continues to secrete progesterone, and within 10–12 days after ovulation the syncytiotrophoblast begins to secrete human chorionic gonadotrophin (hCG) into the intervillous space. Most pregnancy tests are based on the detection of hCG, which takes over the role of luteinizing hormone (LH) and stimulates the production of progesterone, 17-hydroxyprogesterone and estradiol by the corpus luteum. Plasma levels of hCG reach a peak between the ninth and fourteenth week of pregnancy, when luteal function begins to fade, and by 20 weeks, both luteal function and plasma hCG have declined.
The syncytiotrophoblast secretes another hormone, human placental lactogen (hPL) [...]. Its function may be to inhibit maternal growth hormone production, and it has several metabolic effects, notably glucose-sparing and lipolytic, possibly through its anti-insulin effects. [...] The corpus luteum synthesizes relaxin, which relaxes the uterine muscle [...] Progesterone concentrations rise progressively during pregnancy, and a major function of the hormone is thought to be its action, together with relaxin, to inhibit uterine motility, partly by decreasing its sensitivity to oxytocin [...] A[n] important role of estrogens is to stimulate the steady rise in maternal plasma prolactin. Prolactin [...] is the postpartum lactogenic hormone [...] The placenta, which takes over the production of the hormones of pregnancy from the corpus luteum, is part of what is termed the fetoplacental unit. The placenta attains its mature structure by the end of the first trimester of pregnancy. [...] The placenta is not only an endocrine organ, but also provides nutrients for the developing fetus and removes its waste products. [...] The placenta lacks 17-hydroxylase and therefore cannot produce androgens. This is done by the fetal adrenal glands, and the androgens thus formed are the precursors of the estrogens. The placenta converts maternal and fetal dehydroepiandrosterone sulphate (DHEA-S) to testosterone and androstenedione, which are aromatized to estrone and estradiol. Another enzyme lacking in the placenta is 16-hydroxylase, so the placenta cannot directly form estriol and needs DHEA-S as substrate.”
“Normal fertility in the male is produced by a complex interaction between genetic, autocrine, paracrine and endocrine function. The endocrine control of reproductive function in the male depends upon an intact hypothalamo–pituitary– testicular axis. The testis has a dual role – the production of spermatozoa and the synthesis and secretion of testosterone needed for the development and maintenance of secondary sexual characteristics and essential for maintaining spermatogenesis. These functions in turn depend upon the pituitary gonadotrophin hormones: luteinizing hormone (LH; required to stimulate testicular Leydig cells to produce testosterone); and follicle stimulating hormone (FSH; required for the development of the immature testis and a possible role in adult spermatogenesis). Gonadotrophin production occurs in response to stimulation by hypothalamic GnRH. Testosterone exerts a negative feedback on the secretion of LH and FSH and the hormone inhibin-β, also synthesized by the testis, has a specific regulatory role for FSH.”
(A thought which occurred to me while reading these sections of the book: ‘It is fortunate that living organisms do not need to understand in detail how their reproductive systems work in order to have offspring…’)
“The term ‘functional disorders’ is used to describe a group of conditions [disorders of reproductive function in females] in which there are no structural or endocrine synthetic abnormalities in the pituitary–ovarian axis. Hypothalamic amenorrhoea is usually associated with weight-reducing diets, often with excess exercise [...] It is the commonest cause of secondary amenorrhoea seen in endocrine clinics. Although a reduction in weight to 10% below ideal body weight is usually associated with amenorrhoea, there is wide variation between women. Changes in body composition, particularly reduced fat mass, are crucial to the characteristic hypothalamic changes of impaired GnRH secretion, loss of gonadotrophin pulsatility and subsequent hypogonadotrophic hypogonadism [...]. The treatment of weight- and exercise-related amenorrhoea is specifically weight gain and reduction in exercise. [...] Untreated, hypothalamic amenorrhoea is associated with reduced bone mineral density and ultimately osteoporosis. Women with long-term hypoestrogenaemia should have their bone density recorded and, if there is significant osteopaenia or osteoporosis, combined estrogen/progesterone replacement therapy should be considered.”
“In birds and mammals, testosterone sexually differentiates the fetal brain. The fetal brain contains androgen and estrogen receptors, which mediate these actions of testosterone. [...] there is evidence that testosterone causes changes in the fetal brain during sexual differentiation of the brain at about 6 weeks.”
“The precise nature of the influence of testosterone on behaviour is unknown, due in part to the limitations of methods of study. In humans, there is no apparent relationship between plasma levels of testosterone and sexual or aggressive behaviour. It seems that behaviour has a powerful influence on testosterone production, since stress drives it down, as does depression and threatening behaviour from others. In captive primate colonies, subordinate males have raised prolactin and very much reduced plasma levels of testosterone.”
“About 120 million sperm are produced each day by the young adult human testis. Most are stored in the vas deferens and the ampulla of the vas deferens, where they can remain and retain their fertility for at least 1 month. While stored, they are inactive due to several inhibitory factors, and are activated once in the uterus. In the female reproductive tract, sperm remain alive for 1 or 2 days at most.”
“Blood pressure is raised (i) when the heart beats more powerfully (positive inotropic effect); (ii) when arterioles constrict, increasing the peripheral resistance; (iii) when fluid and salts are retained; and (iv) through the influence of cardiovascular control centres in the brain, or a combination of two or more of these factors.”
“In recent years, adipose tissue has become recognized as a highly metabolically active organ. [For one example of a publication going into much more detail about these things, specifically in the context of cancer, see Kolonin et al.] [...] The neuroendocrine system plays a critical role in energy metabolism and homeostasis and is implicated in the control of feeding behaviour [...and for much more about this, see Redline and Berger's book.] [...]. Fats are the main energy stores in the body. Fats provide the most efficient means of storing energy in terms of kJ/g, and the body can store seemingly unlimited amounts of fat [...]. Carbohydrate constitutes <1% of energy stores, and tissues such as the brain are absolutely dependent on a constant supply of glucose, which must be supplied in the diet or by gluconeogenesis. Proteins contain about 20% of the body’s energy stores, but since proteins have a structural and functional role, their integrity is defended, except in fasting, and these stores are therefore not readily available. Circulating glucose can be considered as a glucose pool [...], which is in a dynamic state of equilibrium, balancing the inflow and outflow of glucose. The sources of inflow are the diet (carbohydrates) and hepatic glycogenolysis. The outflows are to the tissues, for glycogen synthesis, for energy use, or, if plasma concentrations reach a sufficient level, into the urine. This level is not usually reached in normal, healthy people. [...] Regulation of the glucose flows is through the action of endocrine hormones, these being epinephrine, growth hormone, insulin, glucagon, glucocorticoids and thyroxine. Insulin is the only hormone with a hypoglycaemic action, whereas all the others are hyperglycaemic, since they stimulate glycogenolysis. [...] Integration of fat, carbohydrate and protein metabolism is essential for the effective control of the glucose pool. 
Two other pools are drawn upon for this, these being the free fatty acid (FFA) pool and the amino acid (AA) pool [...] The FFA pool comprises the balance between dietary FFA absorbed from the GIT [gastrointestinal tract], FFA released from adipose tissue after lipolysis, and FFA entering the metabolic process. Insulin drives FFA into storage as lipids, while glucagon, growth hormone and epinephrine stimulate lipolysis [breakdown of fats]. The AA pool in the bloodstream comprises the balance between protein synthesis and the entry of amino acids into the gluconeogenic pathways.”
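The ‘pool’ metaphor in the passage above lends itself to a simple bookkeeping sketch. The toy model below is purely illustrative (the threshold value and the flow amounts are invented for the example, not taken from the book); it just captures the inflow/outflow balance and the renal-threshold spill-over into urine described in the text:

```python
# Toy accounting model of the "glucose pool": inflows from diet and hepatic
# glycogenolysis, outflows to tissues and glycogen synthesis, and spill-over
# into urine only above a renal threshold. All names and numbers are
# illustrative; this is a bookkeeping sketch, not a physiological simulation.
RENAL_THRESHOLD = 10.0  # plasma glucose level above which glucose appears in urine

def update_pool(pool, diet_in, glycogenolysis_in, tissue_out, glycogen_synthesis_out):
    """Return (new pool level, amount lost to urine) after one round of flows."""
    pool += diet_in + glycogenolysis_in            # inflows
    pool -= tissue_out + glycogen_synthesis_out    # outflows to tissues/storage
    urine = max(0.0, pool - RENAL_THRESHOLD)       # spills only above the threshold
    return pool - urine, urine

level, urine = update_pool(5.0, diet_in=3.0, glycogenolysis_in=1.0,
                           tissue_out=2.5, glycogen_synthesis_out=1.0)
print(level, urine)  # pool settles at 5.5, nothing lost to urine
```

In a normal, healthy person the threshold is not reached, so the urine term stays at zero, exactly as the quote notes.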
“In humans, food intake is determined by a number of factors, including the peripheral balance between usage and storage of energy, and by the brain, which through its appetite and satiety centres can trigger and terminate feeding behaviour [...]. Leptin is secreted by human adipocytes but it may be more important (in the human) in the long-term maintenance of adequate energy stores during periods of energy deficit, rather than as a short-term satiety hormone. Feeding behaviour in humans can be initiated and sustained not only through hunger, but also through an awareness of the availability of especially palatable foods and by emotional states; the central mechanisms underlying this behaviour are poorly understood.”
It’s been a long time since I had one of these.
Some random stuff I’ve come across:
i. Reviews of Anything. Some pretty funny stuff there. Examples include: Our solar system: 1 star. Reviews of this review. The 5 star Rating System: 9/10. Obese Americans, 1 out of 4. Spell Checker: 1 satr.
iii. The Bad Writing Contest. A quote from the link:
“The move from a structuralist account in which capital is understood to structure social relations in relatively homologous ways to a view of hegemony in which power relations are subject to repetition, convergence, and rearticulation brought the question of temporality into the thinking of structure, and marked a shift from a form of Althusserian theory that takes structural totalities as theoretical objects to one in which the insights into the contingent possibility of structure inaugurate a renewed conception of hegemony as bound up with the contingent sites and strategies of the rearticulation of power.”
In contexts where you socialize with people who write that way, dumbpiphanies may happen.
v. I’m not actually sure I liked this lecture very much (I was very much annoyed by the repeated misspelling ‘cristal’ for crystal in the slides in the last part of the lecture. I find that kind of sloppiness irritating, because I tend to use spelling errors in the notes/slides of mathematical lectures as what might be termed a caution heuristic: if the lecturer did not bother to correct spelling errors, I figure he probably also didn’t bother to correct other errors in the slides – and once you start suspecting that there might be errors in the slides, a lecture becomes less enjoyable to watch, especially when it deals with complicated stuff which is hard enough to follow as it is), but I figured I might as well share it anyway:
vi. arXiv vs snarXiv.
I’ve been reading and have at this point almost finished Sexual Selection in Primates: New and Comparative Perspectives, an awesome book which I’ll certainly give five stars and most likely add to my goodreads favourites, yet now I find myself covering this borderline-lousy book instead. The reason why I’m doing that is simple: Kappeler & van Schaik’s book (the awesome one) is an old-fashioned paper book, which makes it a lot more work to blog than this one.
I’m not impressed with Volkmar et al.’s coverage, and I’m at the moment considering whether I should even finish it – thus the question mark in the parenthesis; if I don’t read any more of it, I’ll certainly not post another post about the book later. It is a rare thing for me to stop reading a book once I’ve decided that it’s worth reading; I’ll often quite quickly get a sense of whether a book is worth reading or not, and I rarely give up on a book once I’m past the first 100 pages. I did not at any point think this book was awesome – throughout the book so far I’ve hovered somewhere between one and two stars on goodreads – but it’s one of very few books dealing with the topic, so it’s not like there are a lot of alternatives out there, and this gave the authors some leeway they would not otherwise have had. The most recent chapter I read, on Pharmacotherapy of Behavioral Symptoms and Psychiatric Comorbidities in Adolescents and Adults with Autism Spectrum Disorders, however, was very close to pushing me over the limit. Here are some sample quotes from that chapter:
“A case report described a 25-year-old male with Asperger’s disorder who was diagnosed with bipolar I disorder with psychotic features after exhibiting a period of hyperactive, irritable, and assaultive behavior, reduced need for sleep, and grandiose, persecutory delusions (Arora, Praharaj, Sarkhel, & Sinha, 2011). Symptoms of mania improved on a combination of clozapine 200 mg/day and haloperidol 20 mg/ day”
“Clozapine is an atypical antipsychotic that is limited in use due to an increased risk of agranulocytosis and potential to lower the seizure threshold. A case series in three individuals with autism, aged 15-, 17-, and 27-years-old, highlighted successful treatment with clozapine for the management of recurrent aggression toward self and others (Chen, Bedair, McKay, Bowers, & Mazure, 2001; Gobbi & Pulvirenti, 2001; Lambrey et al., 2010).”
“A case series examined this glutamate antagonist in the treatment of three individuals with autism and MR, aged 15–20 years (Wink, Erickson, Stigler, & McDougle, 2011). Dosages ranged from 100 to 200 mg/day. There were reductions in interfering repetitive behaviors in all three subjects.”
“There is one double-blind, placebo-controlled, crossover study of clonidine involving adolescents and adults in the treatment of “hyperarousal behaviors” associated with autism (Fankhauser, Karumanchi, German, Yates, & Karumanchi, 1992). [...] This study examined nine males with autism, aged 5–33 years (mean age, 13 years) [...] Transdermal clonidine resulted in significant clinical improvement on the Clinical Global Impression-Improvement (CGI-I) scale.”
“The majority of published research on the pharmacological treatment of comorbid psychiatric disorders is limited to case reports and a few open-label studies. [...] Although case reports have found some pharmacologic treatment beneficial for psychiatric comorbidities in individuals with ASDs, double-blind, placebo-controlled studies are needed”
One of the authors of that chapter had multiple conflicts of interest which were disclosed towards the end of the chapter, as he’s apparently received research funds from six different pharmaceutical companies. In the chapter they cover studies involving three individuals and even single case stories like the one above (it’s far from the only one). I’m not sure those two observations are unrelated. I think if all you have is a case series involving 3 individuals, it’s probably not necessary to include that stuff in a book like this; that sort of information is close to worthless (single case stories certainly are – that’s what we in other areas call ‘anecdotes’). Once you know that some of the randomized controlled trials in this field involve only 9 subjects, you also start becoming a lot less impressed with those. They draw conclusions in that chapter which I would not have drawn based on the evidence they present. A really serious omission is also that polypharmacy is not even mentioned in the chapter, despite a substantial number of people taking more than one type of drug and despite the fact that we basically have no clue how this affects them (see Lubetsky et al. for more on this). The omission is all the more striking because the polypharmacy issue is actually brought up, or at least mentioned, already in the introductory chapter of the book, where the authors note that “Of those receiving medications [in one study of 480 Canadians from Ontario with ASD], over 80 % received more than one medication.”
One reason why I’ve been reading on is that occasionally there are some interesting data, but another reason is that this book probably provides a good illustration of how people working in this field think. They, incidentally much like the authors in Lubetsky et al., seem to think there’s no budget constraint – they worry about money only insofar as they’re not given enough of it to spend, but they seem to have no notion of the existence of questions like: ‘but isn’t it simply insane to be spending that kind of money on this?’ They have a lot of ideas about how you could improve outcomes, and I’m sure some of those ideas, if implemented, might improve outcomes – whether it would be ‘worth it’ is however a completely different question, and one they do not ask.
I have added some observations from the first half of the book below.
“adolescence and adulthood in ASD [autism spectrum disorder] remain rather poorly understood. Much of the research and clinical work has centered on young children and those of primary school age [...] We now understand that ASD is an early-emerging, usually lifelong neurodevelopmental disorder that significantly impact[s] social, communicative, cognitive, and adaptive skills and has a strong genetic basis [...]. There is now an extensive literature of peer-reviewed, scientific papers focused on ASD, and multiple studies on adult outcome have been published (for a review, see Howlin, 2013). As Howlin notes, however, these focus almost exclusively on outcome in young adulthood and information on older individuals is limited [...], with almost no research focused on aging [...]. Of the studies focused on outcome, most have studied individuals with classic ASD, or “Kanner’s” autism and “outcome” is essentially confined largely to early adulthood. [...] heterogeneity in both the disorder itself and in service delivery render it a complicated landscape for the study of intervention and outcome [...]. Changes in nomenclature and diagnostic taxonomy have also complicated interpretation of research over the years and of identification of older individuals on the autism spectrum [...] The relationship between severity of early symptoms of autism and ultimate outcome remains unclear, with at least a few studies suggesting that the severity of social skills impairment is the most significant outcome predictor [...]. This and many other questions regarding changes in outcome remain to be discovered. Past young adulthood the literature becomes quite sparse. In one review of autism research, of the studies conducted between 2000 and 2010 only 23 (of an estimated 11,000) were focused on adult services [...] Despite the important limitations of the research literature, it does appear that on balance outcome has [improved], and is continuing to improve.”
“The data available suggest that most individuals as adults live with parents/family and that a minority is employed. [...] Even for the most able adults, however, limitations in social interaction, in adaptive/daily life skills, and occupational status [are] striking, with nearly 60 % of cases [in the previously mentioned Canadian study] continuing to live with their families. About one third of [that] sample had had romantic relationships and in a few cases had been married (sometimes with offspring). Most of the sample reported major limitations in social connections (with many having one or fewer social encounters outside their living situation each month). [...] Outcome studies have shown a wide range of variation in the number of individuals with autism who have left home to live independently, in a group home, or some supported living arrangement. A number of studies have shown that the majority of young adults continue to live at home with their parents. [...] Even when adults with autism live outside the family, their families, especially their mothers, have extensive contact and involvement in their care. Kraus et al. (2005) reported that 50% of families visited their adult with autism at least weekly and an equal number of adults came weekly to visit at their mother’s home. [...] This need for continued parental support crosses the entire spectrum of individuals with autism.”
“Recent reviews of outcomes for individuals with ASD through the National Longitudinal Transition Study-2 (NLTS2) have indicated that, as a group, individuals with ASD have low rates of employment, independent living, and lifelong friendships (Newman, Wagner, Cameto, & Knokey, 2009). This longitudinal study followed 11,000 transition-aged students with disabilities from 2001 to 2009. The age range of youth and young adults included in this study [was] between 13 and 26. This sample included 922 students with autism spectrum disorders. Outcomes recorded for this sample included the following findings [...]:
• 32 % of this sample attended post-secondary education of one type or another
• Only 6 % achieved competitive employment
• 21 % had no job or post-secondary education experiences at all
• 80 % continued to live with their parents
• 40 % reported having no friends”
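For a sense of scale, the National Longitudinal Transition Study-2 percentages quoted above can be translated into approximate head counts for the 922 students with ASD in that sample. The percentages are from the text; the rounding to whole persons is mine, and the percentages may not all share exactly this denominator:

```python
# Illustrative only: converting the quoted NLTS2 percentages into approximate
# head counts for the 922 students with ASD in the sample.
n = 922
outcomes = {
    "attended post-secondary education": 0.32,
    "achieved competitive employment": 0.06,
    "no job or post-secondary experience": 0.21,
    "continued to live with their parents": 0.80,
    "reported having no friends": 0.40,
}
for label, pct in outcomes.items():
    print(f"{label}: ~{round(n * pct)} of {n}")
```

So, for example, the 6 % competitive-employment figure corresponds to only about 55 of the 922 students.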
“The relationship between typically developing siblings has been extensively studied; however, very few studies have been conducted in order to investigate the interactions and quality of relationship between siblings when one has autism. For a typically developing sibling, the influence of having a brother or sister with autism is associated with higher rates of behavioral and emotional concerns [...] and fewer prosocial behaviors towards their sibling with autism in some studies [...] Within the literature, higher levels of education of the typically developing sibling as well as living at a distance from their sibling with autism have negative consequences for their perceptions of the sibling bond”
“There is limited research on the spouses of individuals with ASD. With the increasing knowledge and identification of high functioning men and women, there is a growing awareness that a portion of adults with ASD do enter into long-term relationships with others [...]. However, there is limited empirical data about the nature of these relationships. [...] adolescents and adults with ASD have far fewer sexual experiences than their typically developing peers [...] Nichols and Byers (2008) found that participants who were older and had fewer ASD symptoms reported better sexual functioning. Specifically, individuals with fewer ASD symptoms reported greater sexual satisfaction, sexual self-esteem, assertiveness, arousability, and desire. They also reported fewer sexual problems and less anxiety surrounding sexual issues. As such, a sizable population of individuals with ASD is capable of having a satisfying sex life.”
“An impairment in social interaction is a core symptom of ASD [...] and can impact social communication, friendship-making, dating, relationship-building, as well as sexuality. These deficiencies can lead to a decrease in social relationships [...], an increase in loneliness [...], an increase in social isolation, and poor quality friendships [...]. Furthermore, co-morbid diagnoses of other mental disorders are common among this population. Individuals with Asperger’s are 5.7 times more likely to develop symptoms of depression in comparison to the typically developing population (McHale, Dariotis, & Kauh, 2003; Stewart et al., 2006). The literature suggests that most individuals with ASDs show a desire for relationships, but experience loneliness because their difficulties with social skills often interfere with friendship formation [...] in a study of “high-functioning” individuals with autism, more than 56% had never experienced a sexual relationship and only 25% had dated [...] as a whole, studies repeatedly show that although individuals with ASD desire intimate relationships, few actually have them. [...] [People] with ASD often lack the social skills knowledge and competence to appropriately pursue and engage in successful romantic relationships [...]. For example, individuals with ASD have been known to naively behave in an intrusive manner with potential romantic partners, which may even be perceived as stalking behavior”
“Despite the pervasiveness of social deficits commonly experienced among individuals with ASD, social skills are comparatively much less studied than other aspects of ASD and research examining social skills interventions for adolescents and adults with ASD is especially rare. In a best evidence synthesis of 66 studies of social skills interventions for individuals with ASD published between 2001 and 2008, only three studies contained adolescent or adult participants [...] Social deficits are typically a major source of impairment for individuals with ASD, regardless of cognitive or language ability [...]. However, the considerable heterogeneity in the level of cognitive functioning and language ability among individuals with ASD may affect the presentation of social deficits. For example, Bauminger, Shulman, and Agam (2003) found that higher-functioning adolescents initiate social interaction with peers more frequently than do their lower-functioning peers; yet, their interactions are often awkward and sometimes even intrusive or offensive. [...] high functioning adolescents may be no less affected by social deficits than those with cognitive limitations; rather, their heightened self-awareness and false appearance of being less impaired may actually increase the severity of their social limitations and motivation, perhaps increasing the likelihood of peer rejection and neglect. Consequences of poor social skills often manifest in the form of peer rejection, peer victimization, poor social support, and isolation. Consequently, individuals with ASD generally report higher levels of loneliness and poorer quality of friendships”
“adults with ASD often present with more depression and anxiety than their adolescent counterparts [...]. Interestingly, higher-functioning adults with greater intelligence and less autistic symptomatology tend to experience more depression, anxiety, social isolation, withdrawal, and peer victimization [...] than lower-functioning individuals. This may be due in part to greater social expectations often placed on higher-functioning adults occurring as a result of placement in less protective and more inclusive settings. With higher-functioning adults with ASD often giving the appearance of seeming more “odd” than disabled by their peers, these individuals may be more susceptible to peer rejection, and consequently greater negative socio-emotional outcomes like depression and anxiety. Furthermore, greater self-awareness about peer rejection and “differentness” more likely found in higher-functioning adults with ASD may also contribute to greater depression and anxiety”
“While social skills training has been utilized for decades and is not a particularly unique or novel treatment for individuals with ASD, the research literature suggests that these approaches have not been tremendously effective in improving the social functioning of individuals on the autism spectrum [...] While social skills training has increasingly become a popular method for helping individuals with ASD adapt to their social environment [...], a review of the research literature suggests there are very few evidence-based social skills interventions for adolescents and adults with ASD [...]. With emphasis on early intervention, most social skills treatment studies have targeted younger children on the autism spectrum, with few clinical research trials focusing on adolescents or adults with ASD. Among the limited number of social skills intervention studies conducted with this population, most have not been formally tested in terms of their efficacy in improving social competence or the development of close friendships, nor do they examine the maintenance of treatment gains months or years after the intervention has ended. [...] the literature on social skills training for youth with ASD has been far from encouraging. In a review of the social skills treatment literature, White et al. (2007) identified 14 studies that used group-based social skills training for children and adolescents with ASD. Among these studies, only one used a randomized control group design [...] None of these studies examined the maintenance or trajectory of improvement in social competency over time [...] Even fewer studies have focused on social skills treatment for adults with ASD. To date, only three published studies appear to have tested the effectiveness of a social skills intervention for adults with ASD [...] Only 4 of the 14 studies White et al. (2007) included in their review employed a RCT with a control group. 
In a similar review of social skills training interventions for children and adolescents with ASD, Rao et al. (2008) found that 9 out of 10 reviewed studies did not use a RCT as their research design. [...] Regrettably, most social skills intervention studies are limited in their ability to generalize research findings to other settings and other populations of adolescents and adults with ASD. Two of the biggest offenders to generalization relate to sample size and participant characteristics. Most social skills training intervention studies for adolescents and adults with ASD have small sample sizes [...] single-case experimental designs with approximately three or four participants appear to be the most common research design employed within social skills training studies”
i. “Children are like men, the experience of others does not help them.” (Alphonse Daudet)
ii. “Men grow old, but they do not ripen.” (-ll-)
iii. “Ingratitude calls forth reproaches as gratitude brings renewed kindnesses.” (Marie de Rabutin-Chantal, marquise de Sévigné)
iv. “There is no person who is not dangerous for some one.” (-ll-)
v. “Whatever is worth doing at all, is worth doing well.” (Philip Stanhope, 4th Earl of Chesterfield)
vi. “Idleness is only the refuge of weak minds.” (-ll-)
vii. “Abject flattery and indiscriminate assentation degrade, as much as indiscriminate contradiction and noisy debate disgust. But a modest assertion of one’s own opinion, and a complaisant acquiescence in other people’s, preserve dignity.” (-ll-)
viii. “Knowledge may give weight, but accomplishments give luster, and many more people see than weigh.” (-ll-)
ix. “Let blockheads read what blockheads wrote.” (-ll-)
x. “The moral of the story of the Pilgrims is that if you work hard all your life and behave yourself every minute and take no time out for fun you will break practically even, if you can borrow enough money to pay your taxes.” (Will Cuppy)
xi. “The Bayeux Tapestry is accepted as an authority on many details of life and the fine points of history in the eleventh century. For instance, the horses in those days had green legs, blue bodies, yellow manes, and red heads, while the people were all double-jointed and quite different from what we generally think of as human beings.” (-ll-. Now I’d sort of wish we’d had someone like Cuppy show us around back when I saw the Bayeux Tapestry many years ago…)
xii. “In some respects, Nero was ahead of his time. He boiled his drinking water to remove the impurities and cooled it with unsanitary ice to put them back in. He renamed the month of April after himself, calling it Neroneus, but the idea never caught on because April is not Neroneus and there is no use pretending that it is. During his reign of fourteen years, the outlying provinces are said to have prospered. They were farther away.” (-ll-)
xiii. “[Alexander the Great] was often extremely brutal to his captives, whom he sold into slavery, tortured to death, or forced to learn Greek.” (-ll-)
xiv. “People talk vaguely about the innocence of a little child, but they take mighty good care not to let it out of their sight for twenty minutes.” (Saki)
xv. “It occurred to me that I would like to be a poet. The chief qualification, I understand is that you must be born. Well, I hunted up my birth certificate, and found that I was all right on that score.” (-ll-)
xvi. “The sacrifices of friendship were beautiful in her eyes as long as she was not asked to make them.” (-ll-)
xvii. “Wisdom cannot prevent a fall, but may cushion it.” (Mason Cooley)
xviii. “I am easy-going right up to the borders of my self-interest.” (-ll-)
xix. “Scepticism is always a back road leading to some credo or other.” (-ll-)
xx. “In an aphorism, aptness counts for more than truth.” (-ll-)
This book is another publication from the 100 Cases … series which I’ve talked about before – I refer to these posts for some general comments about what this series is like and some talk about the other books in the series which I’ve read. The book is much like the others, though of course the specific topics covered are different in the various publications. I liked this book and gave it 3 stars on goodreads. The book has three sections: a section dealing with ‘chemical pathology, immunology and genetics'; a section dealing with ‘histopathology'; and a section dealing with ‘haematology’. As usual I knew a lot more about some of the topics covered than I did about some of the others. Some cases were quite easy, others were not. Some of the stuff covered in Greenstein & Wood’s endocrinology text came in handy along the way and enabled me for example to easily identify a case of Cushing’s syndrome and a case of Graves’ disease. I don’t think I’ll spoil anything by noting that two of the cases in this book involved these disorders, but if you plan on reading it later on you may want to skip the coverage below, as I have included some general comments from the answer sections of the book in this post.
As someone who’s not working in the medical field and who will almost certainly never need to know how to interpret a water deprivation test (also covered in detail in Greenstein and Wood, incidentally), there are some parts of books like this one which are not particularly ‘relevant’ to me; however I’d argue that far from all of the material included in a book like this one is ‘stuff you don’t need to know’: there are, for example, a lot of neat observations about how specific symptoms (and symptom complexes) are linked to specific disorders, related ideas about which other medical conditions might cause similar health problems, and notes on which risk factors are important to keep in mind in specific contexts. If you’ve had occasional fevers, night sweats and weight loss over the last few months, you should probably have seen a doctor a while ago – knowledge included in books like this one may make the reader a bit less likely to overlook an important and potentially treatable health problem, and/or increase awareness of modifiable risk factors in specific contexts. One problem, however, is that the book will be hard to read if you have not read any medical textbooks before; in that case I would probably advise against reading it, as it’s almost certainly not worth the effort.
I have added a few observations from the book below.
“After a bone marrow transplant (and any associated chemotherapy), the main risks are infection (from low white cell counts and the use of immunosuppressants, such as cyclosporin), bleeding (from low platelet counts) and graft versus host disease (GVHD). [...] An erythematous rash that develops on the palms or soles of the feet of a patient 10–30 days after a bone marrow transplant is characteristic of GVHD. [...] GVHD is a potentially life-threatening problem that can occur in up to 80% of successful allogeneic bone marrow transplants. [...] Clinically, GVHD manifests like an autoimmune disease with a macular-papular rash, jaundice and hepatosplenomegaly and ultimately organ fibrosis. It classically involves the skin, gastrointestinal tract and the liver. [...] Depending on severity, treatment of acute GVHD may involve topical and intravenous steroid therapy, immunosuppression (e.g. cyclosporine), or biologic therapies targeting TNF-α [...], a key inflammatory cytokine. [...] Prognosis is related to response to treatment. The mortality of patients who completely respond can still be around 20%, and the mortality in those who do not respond is as high as 75%.”
“The leading indication for a liver transplant is alcoholic cirrhosis in adults and biliary atresia in children. [...] The overall one-year survival of a liver transplant is over 90%, with 10-year survival of around 70%. [...] Transplant rejection can be classified by time course, which relates to the underlying immune mechanism: • Hyperacute organ rejection occurs within minutes of the graft perfusion in the operating theatre. [...] The treatment for hyperacute rejection is immediate removal of the graft. • Acute organ rejection takes place a number of weeks after the transplant [...] The treatment for acute rejection includes high dose steroids. • Chronic organ rejection can take place months to years after the transplant. [...] As it is irreversible, treatment for chronic rejection is difficult, and may include re-transplantation.”
“Chronic kidney disease (CKD) is characterized by a reduction in GFR over a period of 3 or more months (normal GFR is >90–120 mL/min). It arises from a progressive impairment of renal function with a decrease in the number of functioning nephrons; generally, patients remain asymptomatic until GFR reduces to below 15 mL/min (stage V CKD). Common causes of CKD are (1) diabetes mellitus, (2) hypertension, (3) glomerulonephritis, (4) renovascular disease, (5) chronic obstruction or interstitial nephritis, and (6) hereditary or cystic renal disease”
“The definition of an aneurysm is an abnormal permanent focal dilatation of all the layers of a blood vessel. An AAA [abdominal aortic aneurysm] is defined when the aortic diameter, as measured below the level of the renal arteries, is one and a half times normal. Women have smaller aortas, but for convenience, more than 3 cm qualifies as aneurysmal. The main risk factors for aneurysm formation are male gender, smoking, hypertension, Caucasian/European descent and atherosclerosis. Although atherosclerosis is a risk factor and both diseases share common predisposing factors, there are also differences. Atherosclerosis is primarily a disease of the intima, the innermost layer of the vessel wall, whereas in aneurysms there is degeneration of the media, the middle layer. [...] The annual risk of rupture equals and begins to outstrip the risk of dying from surgery when the aneurysm exceeds 5.5 cm. This is the size above which surgical repair is recommended, comorbidities permitting. [...] Catastrophic rupture, as in this case, presents with hypovolaemic shock and carries a dismal prognosis.” [The patient in the case history died soon after having arrived at the hospital]
“Stroke refers to an acquired focal neurological deficit caused by an acute vascular event. The neurological deficit persists beyond 24 hours, in contrast to a transient ischaemic attack (TIA) where symptoms resolve within 24 hours, although the distinction is now blurred with the advent of thrombolysis. [...] Strokes are broadly categorized into ischaemic and haemorrhagic types, the majority being ischaemic. The pathophysiology in a haemorrhagic stroke is rupture of a blood vessel causing extravasation of blood into the brain substance with tissue damage and disruption of neuronal connections. The resulting haematoma also compresses surrounding normal tissue. In most ischaemic strokes, there is thromboembolic occlusion of vessels due to underlying atherosclerosis of the aortic arch and carotid arteries. In 15–20% of cases, there is atherosclerotic disease of smaller intrinsic blood vessels within the brain[...]. A further 15–20% are due to emboli from the heart. [...] The territory and the extent of the infarct influence the prognosis; [for example] expressive dysphasia and right hemiparesis are attributable to infarcts in Broca’s area and the motor cortex, both frontal lobe territories supplied by the left middle cerebral artery.”
“The stereotypical profile of a gallstone patient is summed up by the 4Fs: female, fat, fertile and forty. However, while gallstones are twice as common in females, increasing age is a more important risk factor. Above the age of 60, 10–20% of the Western population have gallstones. [...] Most people with cholelithiasis are asymptomatic, but there is a 1–4% annual risk of developing symptoms or complications. [...] Complications depend on the size of the stones. Smaller stones may escape into the common bile duct, but may lodge at the narrowing of the hepatopancreatic sphincter (sphincter of Oddi), obstructing the common bile duct and pancreatic duct, leading to obstructive jaundice and pancreatitis respectively. [...] In most series, alcohol and gallstones each account for 30–35% of cases [of acute pancreatitis]. [...] Once symptomatic, the definitive treatment of gallstone disease is generally surgical via a cholecystectomy.”
“Breast cancer affects 1 in 8 women (lifetime risk) in the UK. [...] Between 10 and 40% of women who are found to have a mass by mammography will have breast cancer. [...] The presence of lymphovascular invasion indicates the likelihood of spread of tumour cells beyond the breast, thereby conferring a poorer outlook. Without lymph node involvement, the 10-year disease-free survival is close to 70–80% but falls progressively with the number of involved nodes.”
“Melanoma is a cancer of melanocytes, the pigmented cells in the skin, and is caused by injury to lightly pigmented skin by excessive exposure to ultraviolet (UV) radiation [...] The change in colour of a pre-existing pigmented lesion with itching and bleeding and irregular margins on examination are indicators of transformation to melanoma. Melanomas progress through a radial growth phase to a vertical growth phase. In the radial growth phase, the lesion expands horizontally within the epidermis and superficial dermis often for a long period of time. Progression to the vertical phase is characterized by downward growth of the lesion into the deeper dermis and with absence of maturation of cells at the advancing front. During this phase, the lesion acquires the potential to metastasize through lymphovascular channels. The probability of this happening increases with increasing depth of invasion (Breslow thickness) by the melanoma cells. [...] The ABCDE mnemonic aids in the diagnosis of melanoma: Asymmetry – melanomas are likely to be irregular or asymmetrical. Border – melanomas are more likely to have an irregular border with jagged edges. Colour – melanomas tend to be variegated in colour [...]. Diameter – melanomas are usually more than 7 mm in diameter. Evolution – look for changes in the size, shape or colour of a mole.”
“CLL [chronic lymphocytic leukaemia] is the most common leukaemia in the Western world. Typically, it is picked up via an incidental lymphocytosis in an asymptomatic individual. [...] The disease is staged according to the Binet classification. Typically, patients with Binet stage A disease require no immediate treatment. Symptomatic stage B and all stage C patients receive chemotherapy. [...] cure is rare and the aim is to achieve periods of remission and symptom control. [...] The median survival in CLL is between four and six years, though some patients survive a decade or more. [...] There is [...] a tendency of CLL to transform into a more aggressive leukaemia, typically a prolymphocytic transformation (in 15–30% of patients) or, less commonly (<10% of cases), transformation into a diffuse large B-cell lymphoma (a so-called Richter transformation). Appearance of transformative disease is an ominous sign, with few patients surviving for more than a year with such disease.”
“Pain, swelling, warmth, tenderness and immobility are the five cardinal signs of acute inflammation.”
“Osteomyelitis is an infection of bone that is characterized by progressive inflammatory destruction with the formation of sequestra (dead pieces of bone within living bone), which if not treated leads to new bone formation occurring on top of the dead and infected bone. It can affect any bone, although it occurs most commonly in long bones. [...] Bone phagocytes engulf the bacteria and release osteolytic enzymes and toxic oxygen free radicals, which lyse the surrounding bone. Pus raises intraosseus pressure and impairs blood flow, resulting in thrombosis of the blood vessels. Ischaemia results in bone necrosis and devitalized segments of bone (known as sequestra). These sequestra are important in the pathogenesis of non-resolving infection, acting as an ongoing focus of infection if not removed. Osteomyelitis is one of the most difficult infections to treat. Treatment may require surgery in addition to antibiotics, especially in chronic osteomyelitis where sequestra are present. [...] Poorly controlled diabetics are at increased risk of infections, and having an infection leads to poor control of diabetes via altered physiology occurring during infection. Diabetics are prone to developing foot ulcers, which in turn are prone to becoming infected, which then act as a source of bacteria for infecting the contiguous bones of the feet. This process is exacerbated in patients with peripheral neuropathy, poor diabetic control and peripheral vascular disease, as these all increase the risk of development of skin breakdown and subsequent osteomyelitis.” [The patient was of course a diabetic...]
“Recent onset fever and back pain suggest an upper UTI [urinary tract infection]. UTIs are classified by anatomy into lower and upper UTIs. Lower UTIs refer to infections at or below the level of the bladder, and include cystitis, urethritis, prostatitis, and epididymitis (the latter three being more often sexually transmitted). Upper UTIs refer to infection above the bladder, and include the ureters and kidneys. Infection of the urinary tract above the bladder is known as pyelonephritis [which] may be life threatening or lead to permanent kidney damage if not promptly treated. UTIs are also classified as complicated or uncomplicated. UTIs in men, the elderly, pregnant women, those who have an indwelling catheter, and anatomic or functional abnormality of the urinary tract are considered to be complicated. A complicated UTI will often receive longer courses of broader spectrum antibiotics. Importantly, the clinical history alone of dysuria and frequency (without vaginal discharge) is associated with more than 90% probability of a UTI in healthy women. [...] In women, a UTI develops when urinary pathogens from the bowel or vagina colonize the urethral mucosa, and ascend via the urethra into the bladder. During an uncomplicated symptomatic UTI in women, it is rare for infection to ascend via the ureter into the kidney to cause pyelonephritis. [...] Up to 40% of uncomplicated lower UTIs in women will resolve spontaneously without antimicrobial therapy. The use of antibiotics in this cohort is controversial when taking into account the side effects of antibiotics and their effect on normal flora. If prescribed, antibiotics for uncomplicated lower UTIs should be narrow-spectrum [...] Most healthcare-associated UTIs are associated with the use of urinary catheters. Each day the catheter remains in situ, the risk of UTI rises by around 5%. Thus inserting catheters only when absolutely needed, and ensuring they are removed as soon as possible, can prevent these.”
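As an aside, the quoted ~5% risk per catheter-day invites a quick back-of-the-envelope calculation. A minimal sketch in Python, assuming (simplistically) that the daily risk is constant and independent from day to day – not a clinical model, just the compounding arithmetic:

```python
def cumulative_uti_risk(days, daily_risk=0.05):
    """P(at least one UTI) after `days` catheter-days,
    assuming an independent, constant daily risk."""
    return 1 - (1 - daily_risk) ** days

# Risk compounds quickly: after a week it is already around 30%,
# and after a month roughly 80%.
for d in (1, 7, 14, 30):
    print(d, round(cumulative_uti_risk(d), 3))
```

Which is presumably part of why the book stresses inserting catheters only when absolutely needed and removing them as soon as possible.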
This will be my last post about the book. I’ve included some observations from the second half of the book below.
“In the present chapter we look at [...] time scales of a few years to a few centuries, up to the life spans of one or a few generations of trees. Change is examined in the context of development and disintegration of the forest canopy, the forest growth cycle [...] There seems to be a general model of forest dynamics which holds in many different biomes, albeit with local variants. [...] Two spatial scales of canopy dynamics can be distinguished: patch disturbance, which involves one or a few trees, and community-wide disturbance. Patch disturbance is sometimes called ‘forest gap-phase dynamics’ and since about the mid-1970s has been one of the main interests of forest scientists in many parts of the world.”
“Species differ in the microclimate in which they successfully regenerate. [...] the microclimates within a rain forest [...] are mainly determined by size of the canopy gap. The microclimate above the forest canopy, which is similar to that in a large clearing, is substantially different from that near the floor below mature phase forest. [...] Outside, wind speeds during the day are higher, as is air temperature, while relative humidity is lower. [...] The light climate within a forest is complex. There are four components, skylight coming through canopy holes, direct sunlight, seen as sunflecks on the forest floor, light transmitted through leaves, and light reflected from leaves, trunks and other surfaces. [...] Both the quantity and quality of light reaching the plant is known to be of profound importance in the mechanisms of gap-phase dynamics [...] The waveband 400 to 700 nm (which is approximately the visual spectrum) is utilized for photosynthesis and is known as photosynthetically active radiation or PAR. The forest floor only receives up to c. 2 per cent of the PAR incident on the forest canopy [...] In addition to reduction in quantity of PAR within the forest canopy, PAR also changes in quality with a shift in the ratio of red to far-red wavelengths [...] the temporal pattern of sunfleck distribution through the day [...] is of importance, not just the daily total PAR. [...] The role of irradiance in seedling growth and release is easy to observe and has been much investigated. By contrast, little attention has been given to the potential role of plant mineral nutrients. [...] So far, nutrients seem unimportant compared to radiation. [...] Overall the shade/nutrient interaction story remains unresolved. One part of the picture is likely to be that there is no response to nutrients in dark conditions where irradiance is limiting, but a response at higher irradiances.”
“Canopy gaps have an aerial microclimate like that above the forest but the smaller the gap the less different it is from the forest interior [...] Gaps were at first regarded as having a microclimate varying with their size, to be contrasted with closed-forest microclimate. But this is a simplification. [...] gaps are neither homogeneous holes nor are they sharply bounded. Within a gap the microclimate is most extreme towards the centre and changes outwards to the physical gap edge and beyond [...] The larger the gap the more extreme the microclimate of its centre. [...] there is much more variability between small gaps than large ones in microclimate [and] gap size is a poor surrogate measure of microclimate, most markedly over short periods.”
“tree species differ in the amount of solar radiation required for their regeneration. [...] Ecologists and foresters continue to engage in vigorous debate as to whether species along [the] spectrum of light climates can be divided into clear, separate groups. [...] some strong light-demanders require full light for both seed germination and seedling establishment. These are the pioneer species, set apart from all others by these two features. By contrast, all other species have the capacity to germinate and establish below canopy shade. These may be called climax species. They are able to perpetuate in the same place, but are an extremely diverse group. [...] Pioneer species germinate and establish in a gap after its creation [...] They grow fast [...] Below the canopy seedlings of climax species establish and, as the pioneer canopy breaks up after the death of individual trees, these climax species are ‘released’ [...] and grow up as a second growth cycle. Succession has occurred as a group of climax species replaces the group of pioneer species.[...] Climax species as a group [...] perpetuate themselves in situ, there is no directional change in species composition. This is called cyclic regeneration or replacement. In a small gap, pre-existing climax seedlings are released. In a large gap pioneers, which appear after gap creation, form the next forest growth cycle. One of the puzzles which remains unsolved is what determines gap-switch size. [...] In all tropical rain forest floras there are fewer pioneer than climax species, and they mostly belong to a few families [...] The most species-rich forested landscape will be one that includes both patches of secondary forest recovering from a big disturbance and consisting of pioneers, and also patches of primary forest composed of climax species.”
“Rain forest silviculture is the manipulation of the forest to favour species and thereby to enhance its value to humans. [...] Timber properties, whether heavy or light, dark or pale, durable or not, are strongly correlated with growth rate and thus to the extent to which the species is light-demanding [...]. Thus, the ecological basis of natural forest silviculture is the manipulation of the forest canopy. The biological principle of silviculture is that by controlling canopy gap size it is possible to influence species composition of the next growth cycle. The bigger the gaps the more fast-growing light-demanders will be favoured. This concept has been known in continental Europe since at least the twelfth century. [...] The silvicultural systems that have been applied to tropical rain forests belong to one of two kinds: the polycyclic and monocyclic systems, respectively [...]. As the name implies, polycyclic systems are based on the repeated removal of selected trees in a continuing series of felling cycles, whose length is less than the time it takes the tree to mature [rotation age]. The aim is to remove trees before they begin to deteriorate from old age [...] extraction on a polycyclic system tends to result in the formation of scattered small gaps in the forest canopy. By contrast, monocyclic systems remove all saleable trees at a single operation, and the length of the cycle more or less equals the rotation age of the trees. Except in those cases where there are few saleable trees, damage to the forest is more drastic than under a polycyclic system, the canopy is more extensively destroyed, and bigger gaps are formed. [...] the two kinds of system will tend to favour shade-bearing and light-demanding species, respectively, but the extent of the difference will depend on how many trees are felled at each cycle in a polycyclic system. [...] 
Low intensity selective logging on a polycyclic system closely mimics the natural processes of forest dynamics and scarcely alters the composition. Monocyclic silvicultural systems, and polycyclic systems with many stems felled per hectare, shift species composition [...] The amount of damage to the forest depends more on how many trees are felled than on timber volume extracted. It is commonly the case that for every tree removed for timber (logged) a second tree is totally smashed and a third tree receives damage from which it will recover”
“The essence of shifting agriculture (sometimes called swidden agriculture) is to fell a patch of forest, allow it to dry to the point where it will burn well, and then to set it on fire. The plant mineral nutrients are thereby mobilized and become available to plants in the ash. One or two fast-maturing crops of staple food species are grown [...]. Yields then fall and the patch is abandoned to allow secondary forest to grow. Longer-lived species, such as chilli [...] and fruit trees, and some root crops such as cassava [...] are planted with the staples and continue to yield in the first years of the fallow period. Besides fruit and root crops the bush fallow, as it is often called, provides firewood, medicines, and building materials. After a minimum of 7 to 10 years the cycle can be repeated. There are many variants. Shifting agriculture was invented independently in all parts of the tropical world and has proved sustainable over many centuries. [...] It is now realized that shifting agriculture, as traditionally practised, is a sustainable low-input form of cultivation which can continue indefinitely on the infertile soils underlying most tropical rain forest [...], provided the carrying capacity of the land is not exceeded. [...] Shifting agriculture has the limitation that it can usually only support 10-20 persons km⁻² [...] because at any one time only c. 10 per cent of the area is under cultivation. It breaks down if either the bush fallow period is excessively shortened or if the period of cultivation is extended for too long, either of which is likely to occur if population increases and a land shortage develops. There is, however, another mode of shifting agriculture which is totally destructive [...]. Farmers fell and burn the forest and grow crops on the released nutrients for several years in succession, continuing until coppicing potential and the soil seed bank are exhausted, pernicious weeds invade, and soil nutrients are seriously depleted. 
They then move on to a new patch of virgin forest. This is happening, for example, in parts of western Amazonia [...] Replacement of forests by agriculture totally destroys them. If farmland is abandoned it is likely to take several centuries before all signs of forest succession have disappeared, and species-rich, structurally complex primary forest restored [...] Agriculture is the main purpose for which rain forests are cleared. There are several major kinds of agriculture and their impact varies from place to place. Important detail is lost by pan-tropical generalization.”
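The carrying-capacity figure quoted above (10-20 persons km⁻², with only c. 10 per cent of the area under cultivation at any one time) is simple arithmetic. A minimal sketch, where the per-km² yield of the cultivated land is a hypothetical number of my own, not from the book:

```python
# Back-of-the-envelope arithmetic behind the shifting-agriculture
# carrying capacity. Only the ~10% cultivated fraction and the 7-10 year
# fallow cycle are from the book; the 150 persons/km2 figure for land
# actually under crops is a made-up illustration.

def carrying_capacity(total_km2, cultivated_fraction, persons_per_cultivated_km2):
    """People supportable when only a fraction of the land is cropped at once."""
    return total_km2 * cultivated_fraction * persons_per_cultivated_km2

# If cultivated land alone could feed ~150 persons/km2 (hypothetical),
# but only ~10% of the area is under crops at any given time:
print(carrying_capacity(100, 0.10, 150) / 100)  # overall density: 15.0 persons/km2
```

Shortening the fallow amounts to raising the cultivated fraction, which is exactly the failure mode the book describes: more land under crops at once leaves less time for the bush fallow to restore nutrients.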
“The mixed cultivation of trees and crops, agroforestry [...], makes use of nutrient cycling by trees, as does shifting agriculture. Trees act as pumps, bringing nutrients into the superficial layers of the soil where shallow-rooted herbaceous crops can utilize them. [...] Early research led to the belief that nearly all the mineral nutrients in tropical rain forests are in the above-ground biomass and, despite much evidence to the contrary, this view is still sometimes expressed. [However] the popular belief that most of the nutrients of a tropical rain forest are in the biomass is seldom true.”
“Given a rich regional flora, forests are particularly favourable for the co-existence of many species in the same community, because they provide many different niches. [...] The forest provides a whole array of different internal microclimates, both horizontally and vertically [recall this related observation from McMenamin & McMenamin: “One aspect of the environment that controls the number and types of organisms living in the environment is called its dimensionality [...]. Two-dimensional (or Dimension 2) environments tend to be flat, whereas three-dimensional environments (Dimension 3) have, to a greater or lesser degree, a third dimension. This third dimension can be either in an upward or a downward direction, or a combination of both directions.” Additional dimensions add additional opportunities for specialization.] [...] The same processes operate in all forests but forests have different degrees of complexity in canopy structure and differ in the number of species that occupy the many facets of what may be termed the ‘regeneration niche’. [...] one-to-one specialization between a single plant and animal species as a factor of species richness exists only in a few cases [...] Guilds of insects specialized to feed on (and where necessary detoxify) particular families or similar families of plants [...] is a looser and commoner form of co-evolution and plays a more substantial role in the packing together of numerous sympatric species [...] Browsing pressure (‘pest pressure’) of herbivores [...] may be one factor that sometimes prevents any single species from attaining dominance, and acts to maintain species richness. In a similar manner dense seedling populations below a parent tree are often thinned out by disease or herbivory [...] and this also therefore contributes to the prevention of single species dominance.”
“An important difference of tropical rain forests from others is the occurrence of locally endemic species [...]. This is one component of their species richness on the extensive scale. It means that in different places a particular niche may be occupied by different species which never compete because they never meet. It has the consequence that species are likely to become extinct when a rain forest is reduced in extent, more so than in other forest biomes. [...] the main reasons why some tropical rain forests are extremely rich in species result from firstly, a long stable climatic history without episodes of extinction, in an equable environment, and in which there is no ‘climatic sieve’ to eliminate some species. Secondly, a forest canopy provides large numbers of spatial and temporal niches [...] Thirdly, richness results from interactions with animals, mainly as pollinators, dispersers, or pests. Some of these factors underlie species richness in other biomes also. [...] The overall effect of all of humankind’s many different impacts on tropical rain forests is to diminish the numerous dimensions of species richness. Not only does man destroy species, he also simplifies the ecosystems the remaining species inhabit.”
“the claim sometimes made that rain forests contain enormous numbers of drugs just awaiting exploitation does not survive critical examination. Reality is more complex, and there are serious difficulties in developing an economic case for biodiversity conservation based on undiscovered pharmaceuticals. [...] The cessation of logging is [likewise] not a realistic option, as too much money is at stake for both the nations and individuals involved.”
“Animal geneticists have given considerable thought to the question of how many individuals are necessary to maintain the full genetic integrity of a species in perpetuity. Much has been learned from zoos. A simple but extremely crude rule-of-thumb is that a minimum population of 50 breeding adults maintains fitness in the short term, thus preserving a species ‘frozen’ at one instant of time. To prevent continual loss of genetic diversity (‘genetic erosion’) over the long term [...] requires a big population, and a minimum of 500 breeding adults has been suggested to be necessary. This 50/500 rule is only a very rough approximation and can differ widely between species. [...] Most difficult to conserve are animals (or indeed plants too) that live at very low population density (e.g. hornbills, tapir, and top carnivores, such as jaguar and tiger), or that have large territories (e.g. gaur, elephant) [...] Increasingly in the future, tropical rain forest will only remain as fragments. [...] There is a problem that such fragments may break the 50/500 rule [...] and contain too few individuals of a species for its long-term genetic integrity. Species that occur at low density are especially vulnerable to genetic erosion, to chance extinction when numbers fall [...], or to inbreeding depression. In particular, many trees live several centuries and may be persisting today but unable to breed, so the species is ‘living but dead’, doomed to extinction. [...] small forest remnants may be too small to support certain species and this may have repercussions on other components of the ecosystem. [...] Besides reduction in area, forest fragmentation also increases the proportion of edge relative to interior [...] and if the fragments are surrounded by open land this will result in a change of microclimate.”
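The 50/500 rule quoted above can be made a little more concrete. Under the standard idealized population-genetics model (my addition, not from the book), expected heterozygosity declines by a factor of (1 − 1/(2Nₑ)) per generation, where Nₑ is the effective number of breeders. A minimal sketch under that assumption:

```python
# Why 'genetic erosion' depends so strongly on population size: in an
# idealized randomly-mating population of effective size Ne, expected
# heterozygosity after t generations is H_t = H_0 * (1 - 1/(2*Ne))**t.
# This is a textbook idealization, not a claim from the book quoted above.

def heterozygosity_retained(ne, generations, h0=1.0):
    """Fraction of initial heterozygosity expected to remain after t generations."""
    return h0 * (1 - 1 / (2 * ne)) ** generations

for ne in (50, 500):
    kept = heterozygosity_retained(ne, generations=100)
    print(f"Ne={ne}: ~{kept:.0%} of heterozygosity left after 100 generations")
```

With Nₑ = 50 roughly a third of the initial heterozygosity survives 100 generations, while with Nₑ = 500 about 90 per cent does, which is the intuition behind the two thresholds; for trees that live several centuries, 100 generations is a very long time indeed, which is why the ‘living but dead’ remnants can persist while already being doomed.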
I’ve started playing active tournament chess again, at least a little bit. The format of the tournament in which I’m participating at the moment is a rapid format, with 45 minutes per player per game, with two games per round – one game with the white pieces and one with the black pieces, against the same opponent. Below I have posted the first four games I’ve played in the tournament so far – this is a short post, but each game lasted a significant amount of time.
The first two games are actually, I think, quite instructive, in that I played against a much lower-rated player and managed to win both games in roughly 20 moves. There’s a reason strong chess players do not lose to beginners, and games like these will tell you part of the story of why that is.
The last two games were not particularly great and I certainly was not satisfied with my play in either of them, especially not the second one – I had a winning position out of the opening, yet I somehow managed to blunder a piece in the middle game. 17…Rad8 was a blunder (the idea was to meet 18.Bxc6 with 18…Rxd2, followed by 19.Qxd2 Qxa1+, and after 20.Qd1 (forced) black takes on a2), whereas after 17…e4! the computer gives black an advantage of roughly -4.5 (an advantage corresponding to almost an entire rook, even though the position is materially balanced – I knew the position was winning, but you still have to find the right moves…). I’d of course missed the check on d5 and Rc1, which were played in the game. I considered taking on d5 after the bishop check, and my intuition was actually correct that this was completely winning (the position is at -2 or so after the exchange sac, according to the engine – this is not surprising, as white is basically playing without the rook on h1 and also has an exposed king in an open position) – but in the end I decided not to play it, as ‘Kh8 is surely winning as well, and an exchange sac is not necessary here’. I was wrong. Actually, the position arising shortly after the blunder, around move 20, is a good illustration of how important piece activity is; the position after 20…e4, where black is basically a whole piece down, is according to the engine still better for black (-0.3). Black has a lot of activity for the material, despite this ‘sacrifice’ of course being completely unnecessary.
Given that I’ve won all my games so far, it’s not possible to calculate a performance rating yet, but I’d say that in terms of results at least I’m doing okay, though perhaps not much better than could have been expected. Anyway, the way this tournament works is that the more games you win, the tougher the opponents you get – I was the rating favourite in both of the matches I’ve played so far, and that’ll change soon enough; I may easily end up playing against a 2200 Elo opponent next round, so I expect to lose and/or draw some games quite soon. If you’re interested in me sharing more of my games here later on, let me know in the comments – I think that if I don’t get any indications that people reading along here would like to see another post like this one again, this will probably be my last post of this kind. I know some of my readers are interested in chess, at least a little bit, but that’s not the same thing as finding posts like this one interesting.
I should note that the internet issue I have had has now, as far as I can tell, been solved. This should make it much easier for me to blog from now on than it has been for the last couple of weeks.
First an update on the issues I mentioned earlier this week: I had a guy come by yesterday to ‘fix the internet problem’. Approximately an hour after he left I lost my connection, and it was gone for the rest of the day. I have internet now, but if the problem is not solved by a second visit on Monday (they’ll send another guy over), the ISP has lost a customer – I’ll give them no more chances; I can’t live like this. The uncertainty is both incredibly stressful and frankly infuriating. I actually lost internet while writing this post. Down periods seem completely random and may last from 5 minutes to 12 hours. I’m much more dependent on the internet than most people are, in part because most of my social interaction with others takes place online.
I’ve read four Christie novels within the last week and I finished The Gambler by Dostoyevsky earlier today – in case you were wondering why I’ve suddenly started reading a lot of fiction, the answer is simple: I’m awake for 16+ hours each day, and if I can’t go online to relax during my off hours I have to find some other way to distract-/enjoy-/whatever myself. Novels are one of the tools I’ve employed.
The internet issue is also the more important of the two in the blogging context; the computer I’m using at the moment is unreliable, but it seems to cause only a limited amount of trouble when I’m doing simple stuff like blogging.
Okay, on to the book. I was rather harsh in my first post, but I did also mention that it had a lot of good stuff. I’ve included some of that stuff in this post below.
“Forests, because of their stature, have internal microclimates that differ from the general climate outside the canopy. [...] In general terms, it is cool, humid, and dark near the floor of a mature patch of forest, progressively altering upwards to the canopy top. Different plants and animal species have specialized to the various forest interior microclimates [...] Night is the winter of the tropics, because the diurnal range of mean daily temperature exceeds the annual range and is greater in drier months. [...] Rain forests develop where every month is wet (with 100 mm rainfall or more), or there are only short dry periods which occur mainly as unpredictable spells lasting only a few days or weeks. Where there are several dry months (60 mm rainfall or less) of regular occurrence, monsoon forests exist. Outside Asia these are usually called tropical seasonal forests. [...] To the biologist [...] there are major differences, and this book is about tropical rain forests, those which occur in the everwet (perhumid) climates, with only passing mention of monsoon forests.”
“Tropical rain forests occur in all three tropical land areas [...]. Most extensive are the American or neotropical rain forests, about half the global total, 4 × 10⁶ km² in area, and one-sixth of the total broad-leaf forest of the world. [...] The second largest block of tropical rain forest occurs in the Eastern tropics, and is estimated to cover 2.5 × 10⁶ km². It is centred on the Malay archipelago, the region known to botanists as Malesia. Indonesia occupies most of the archipelago and is second to Brazil in the amount of rain forest it possesses. [...] Africa has the smallest block of tropical rain forest, 1.8 × 10⁶ km². This is centred on the Congo basin, reaching from the high mountains at its eastern limit westwards to the Atlantic Ocean, with outliers in East Africa. [...] Outside the Congo core the African rain forests have been extensively destroyed.”
“It is now believed that about half the world’s species occur in tropical rain forests although they only occupy about seven per cent of the land area. [...] Just how many species the world’s rain forests contain is still [...] only a matter of rough conjecture. For mammals, birds, and other larger animals there are roughly twice as many species in tropical regions as temperate ones [...]. These groups are fairly well studied, insects and other invertebrates much less so [...] The humid tropics are extremely rich in plant species. Of the total of approximately 250 000 species of flowering plants in the world, about two-thirds (170 000) occur in the tropics. Half of these are in the New World south of the Mexico/US frontier, 21 000 in tropical Africa (plus 10 000 in Madagascar) and 50 000 in tropical and subtropical Asia, with 36 000 in Malesia. [...] There are similarities, especially at family level, between all three blocks of tropical rain forest, but there are fewer genera in common and not many species. [...] In flora Africa has been called ‘the odd man out’; there are fewer families, fewer genera, and fewer species in her rain forests than in either America or Asia. For example, there are 18 genera and 51 species of native palms on Singapore island, as many as on the whole of mainland Africa (15 genera, 50 species) [...] There are also differences within each rain forest region. [...] meaningful discussions of species richness must specify scale. For example, we may usefully compare richness within rain forests by counting tree species on plots of c. 1 ha. This within-community diversity has been called alpha diversity. At the other extreme we can record species richness of a whole landscape made up of several communities, and this has been called gamma diversity. The fynbos is very rich with 8500 species on 89 000 km². It is made up of a mosaic of different floristic communities, each of which has rather few species. 
That is to say fynbos has low alpha and high gamma diversity. Within a single floristic community species replace each other from place to place. This gives a third component to richness, known as beta diversity. For example, within lowland rain forest there are differences in species within a single community between ridges, hillsides, and valleys.”
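The alpha/beta/gamma distinction in the quote can be illustrated with a toy example (my own construction, not from the book): treat each plot as a set of species, take mean within-plot richness as alpha, total landscape richness as gamma, and Whittaker’s multiplicative beta = gamma / mean alpha as a measure of species turnover between plots.

```python
# Toy illustration of alpha, beta, and gamma diversity. The plots and
# species labels are invented; only the definitions follow standard usage.

plots = [
    {"A", "B", "C"},   # e.g. ridge
    {"C", "D", "E"},   # e.g. hillside
    {"E", "F", "G"},   # e.g. valley
]

mean_alpha = sum(len(p) for p in plots) / len(plots)  # within-plot richness: 3.0
gamma = len(set().union(*plots))                      # landscape richness: 7
beta = gamma / mean_alpha                             # turnover between plots: ~2.33

print(mean_alpha, gamma, round(beta, 2))
```

A fynbos-like landscape is then one where each plot is species-poor but plots share few species with each other: low alpha, high beta, high gamma.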
“Most rain forest trees [...] exhibit intermittent shoot growth [...] The intermittent growth of the shoot tips is seldom reflected by growth rings in the wood, and where it is these are not annual and often not annular either. Rain forest trees, unlike those of seasonal climates, cannot be aged by counting wood rings [...] tree age cannot be measured directly. It has [also] been found that the fastest growing juvenile trees in a forest are the ones most likely to succeed, so growth rates averaged from a number of stems are misleading. [...] we have very little reliable information on how long trees can live. [...] Most of the root biomass is in the top 0.3 m or so of the soil and there is sometimes a concentration or root mat at the surface. [...] Roots up to 2 mm in diameter form 20-50 per cent of the total root biomass and their believed rapid turnover is probably a significant part of ecosystem nutrient cycles”
“Besides differences between the three tropical regions there are other differences within them. One major pattern is that within the African and American rain forests there are areas of especially high species richness, set like islands in a sea of relative poverty. [...] No such patchiness has been detected in Asia, where the major pattern is set by Wallace’s Line, one of the sharpest zoogeographical boundaries in the world and which delimits the continental Asian faunas from the Australasian [...]. These patterns are now realized to have explanations based on Earth['s] history [...] Gondwanaland and Laurasia were [originally] separated by the great Tethys Ocean. Tethys was closed by the northwards movement of parts of Gondwanaland [...]. First Africa and then India drifted north and collided with the southern margin of Laurasia. Further east the continental plate which comprised Antarctica/Australia/southern New Guinea moved northwards, broke in two leaving Antarctica behind, and, as a simplification, collided with the southeast extremity of Laurasia, at about 15 million years ago, the mid-Miocene; this created the Malay archipelago (Malesia) as it exists today. Both super-continents had their own sets of plants and animals. [...] Western and eastern Malesia have very different animals, demarcated by a very sharp boundary, Wallace’s line. [...] the evolution of the Malay archipelago was in fact more complex than a single collision. Various shards progressively broke off Gondwana from the Jurassic onwards, drifted northwards, and became embedded in what is now continental Asia [...] The climate of the tropics has been continually changing. The old idea of fixity is quite wrong; climatic changes have had profound influences on species ranges.”
“Most knowledge about past climates is for the last 2 million years, the Quaternary period, during which there has been repeated alternation at high latitudes near the poles between Ice Ages or Glacial periods and Interglacials. During Glacial periods tropical climates were slightly cooler and drier, with lower and more seasonal rainfall. During these times rain forests became less extensive and seasonal forests expanded. Most of the Quaternary was like that; present-day climates are extreme and not typical of the period as a whole. Today we live at the height of an Interglacial. [...] At the Glacial maxima sea levels were lower by as much as 180 m [...] Sea surface temperature was cooler than today, by 5 °C or more at 18 000 BP in the tropics. [...] Rain forests were more extensive than at any time in the Quaternary during the early Pliocene, parts of the Miocene, and especially the early Eocene; so these were all warm periods. Then, in the late Tertiary, fluctuations similar to those of the Quaternary occurred. [...] Africa [as mentioned] has a much poorer flora than the other two rain forest regions. This is believed to be because it was much more strongly desiccated during the Tertiary. [...] Australia too suffered strong Tertiary desiccation. At that time its mesic vegetation became mainly confined to the eastern seaboard. The strip of tropical rain forests found today in north Queensland is only 2-30 km wide and is of particular interest because it contains the relicts of the old mesic flora. This includes the ancestors from which many modern Australian species adapted to hot dry climates are believed to have evolved [...] New Caledonia is a shard of Gondwanaland which drifted away eastwards from northeast Australia starting in the Upper Cretaceous, 82 million years BP. Because it is an island its vegetation has suffered less from the drier Glacial climates so more of the old flora has survived. 
The lands bordering the western Pacific have the greatest concentration of primitive flowering plants found anywhere [...] It is most likely that they survived here as relicts.”
“rain forests have waxed and waned in extent during the Quaternary, and probably in the Tertiary too, and are not the ancient and immutable bastions where life originated which populist writings still sometimes suggest. In the present Interglacial they are as extensive as they have ever been, or nearly so. At glacial maxima lowland rain forests are believed to have contracted and only to have persisted in places where conditions remained favourable for them, as patches surrounded by tropical seasonal forests, like islands set in a sea. In subsequent Interglacials, as perhumid conditions returned, the rain forests expanded out of these patches, which have come to be called Pleistocene refugia. In the late 1960s it was shown that within Amazonia birds have areas of high species endemism and richness which are surrounded by relatively poorer areas. The same was soon demonstrated for lizards. Subsequently many groups of animals have been shown to exhibit such patchiness [...] The centres of concentration more or less coincide with each other [...] These loci overlap with areas that geoscientific evidence suggests retained rain forest during Pleistocene glaciations [...] In the African rain forests four groups of loci of species richness and endemism are now recognized [...] Most parts of Malesia today are about equally rich in species, including endemics, as the Pleistocene refugia of Africa and America. At the Glacial maxima the Sunda and Sahul continental shelves were exposed by falling sea-level. Rain forests were likely to have become confined to the more mountainous places where there was more, orographic, rain. The main development of seasonal forests in this region is likely to have been on the newly exposed lowlands, and when sea-level rose again at the next Interglacial these and the physical signs of seasonal climates [...] were drowned. 
The parts of Malesia that are above sea-level today probably remained, largely perhumid and covered by rain forest, which explains their extreme species richness and their lack of geoscientific evidence of seasonal past climates. [...] Present-day lowland rain forest communities consist of plant and animal species that have survived past climatic vicissitudes or have immigrated since the climate ameliorated. Thus many species co-exist today as a result of historical chance, not because they co-evolved together. Their communities are neither immutable nor finely tuned. This point is of great importance to the ideas scientists have expressed concerning plant-animal interactions [...] Those parts of the world’s tropical rain forests that are most rich in species are those that the evidence shows have been the most stable, where species have evolved and continued to accumulate with the passage of time without episodes of extinction caused by unfavourable climatic periods. This is similar to the pattern observed in other forest biomes”