Econstudentlog

Introduction to Meta Analysis (I)

“Since meta-analysis is a relatively new field, many people, including those who actually use meta-analysis in their work, have not had the opportunity to learn about it systematically. We hope that this volume will provide a framework that allows them to understand the logic of meta-analysis, as well as how to apply and interpret meta-analytic procedures properly.

This book is aimed at researchers, clinicians, and statisticians. Our approach is primarily conceptual. The reader will be able to skip the formulas and still understand, for example, the differences between fixed-effect and random-effects analysis, and the mechanisms used to assess the dispersion in effects from study to study. However, for those with a statistical orientation, we include all the relevant formulas, along with worked examples. […] This volume is intended for readers from various substantive fields, including medicine, epidemiology, social science, business, ecology, and others. While we have included examples from many of these disciplines, the more important message is that meta-analytic methods that may have developed in any one of these fields have application to all of them.”

I’ve been reading this book and I like it so far. I’ve read about the topic before, but I’ve been missing a textbook treatment, and this one is quite good (I’ve read roughly half of it at this point). Below I have added some observations from the first thirteen chapters of the book:

“Meta-analysis refers to the statistical synthesis of results from a series of studies. While the statistical procedures used in a meta-analysis can be applied to any set of data, the synthesis will be meaningful only if the studies have been collected systematically. This could be in the context of a systematic review, the process of systematically locating, appraising, and then synthesizing data from a large number of sources. Or, it could be in the context of synthesizing data from a select group of studies, such as those conducted by a pharmaceutical company to assess the efficacy of a new drug. If a treatment effect (or effect size) is consistent across the series of studies, these procedures enable us to report that the effect is robust across the kinds of populations sampled, and also to estimate the magnitude of the effect more precisely than we could with any of the studies alone. If the treatment effect varies across the series of studies, these procedures enable us to report on the range of effects, and may enable us to identify factors associated with the magnitude of the effect size.”

“For systematic reviews, a clear set of rules is used to search for studies, and then to determine which studies will be included in or excluded from the analysis. Since there is an element of subjectivity in setting these criteria, as well as in the conclusions drawn from the meta-analysis, we cannot say that the systematic review is entirely objective. However, because all of the decisions are specified clearly, the mechanisms are transparent. A key element in most systematic reviews is the statistical synthesis of the data, or the meta-analysis. Unlike the narrative review, where reviewers implicitly assign some level of importance to each study, in meta-analysis the weights assigned to each study are based on mathematical criteria that are specified in advance. While the reviewers and readers may still differ on the substantive meaning of the results (as they might for a primary study), the statistical analysis provides a transparent, objective, and replicable framework for this discussion. […] If the entire review is performed properly, so that the search strategy matches the research question, and yields a reasonably complete and unbiased collection of the relevant studies, then (providing that the included studies are themselves valid) the meta-analysis will also be addressing the intended question. On the other hand, if the search strategy is flawed in concept or execution, or if the studies are providing biased results, then problems exist in the review that the meta-analysis cannot correct.”

“Meta-analyses are conducted for a variety of reasons […] The purpose of the meta-analysis, or more generally, the purpose of any research synthesis has implications for when it should be performed, what model should be used to analyze the data, what sensitivity analyses should be undertaken, and how the results should be interpreted. Losing sight of the fact that meta-analysis is a tool with multiple applications causes confusion and leads to pointless discussions about what is the right way to perform a research synthesis, when there is no single right way. It all depends on the purpose of the synthesis, and the data that are available.”

“The effect size, a value which reflects the magnitude of the treatment effect or (more generally) the strength of a relationship between two variables, is the unit of currency in a meta-analysis. We compute the effect size for each study, and then work with the effect sizes to assess the consistency of the effect across studies and to compute a summary effect. […] The summary effect is nothing more than the weighted mean of the individual effects. However, the mechanism used to assign the weights (and therefore the meaning of the summary effect) depends on our assumptions about the distribution of effect sizes from which the studies were sampled. Under the fixed-effect model, we assume that all studies in the analysis share the same true effect size, and the summary effect is our estimate of this common effect size. Under the random-effects model, we assume that the true effect size varies from study to study, and the summary effect is our estimate of the mean of the distribution of effect sizes. […] A key theme in this volume is the importance of assessing the dispersion of effect sizes from study to study, and then taking this into account when interpreting the data. If the effect size is consistent, then we will usually focus on the summary effect, and note that this effect is robust across the domain of studies included in the analysis. If the effect size varies modestly, then we might still report the summary effect but note that the true effect in any given study could be somewhat lower or higher than this value. If the effect varies substantially from one study to the next, our attention will shift from the summary effect to the dispersion itself.”
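
A minimal sketch of the weighting mechanism may help here (the numbers are invented, not from the book). Under the fixed-effect model each study is weighted by the inverse of its variance, and the summary effect is just the weighted mean of the observed effects:

```python
import numpy as np

# Hypothetical effect sizes (e.g., standardized mean differences)
# and their within-study variances for five studies.
effects = np.array([0.30, 0.45, 0.10, 0.60, 0.25])
variances = np.array([0.04, 0.09, 0.02, 0.16, 0.05])

# Fixed-effect model: inverse-variance weights.
w = 1.0 / variances
summary = np.sum(w * effects) / np.sum(w)  # weighted mean of the effects
se_summary = np.sqrt(1.0 / np.sum(w))      # SE of the summary effect

print(f"Summary effect: {summary:.3f} (SE {se_summary:.3f})")
```

Under the random-effects model the same machinery applies, except that an estimate of the between-studies variance is added to each study’s variance before the weights are computed (see the sketch further below).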

“During the time period beginning in 1959 and ending in 1988 (a span of nearly 30 years) there were a total of 33 randomized trials performed to assess the ability of streptokinase to prevent death following a heart attack. […] The trials varied substantially in size. […] Of the 33 studies, six were statistically significant while the other 27 were not, leading to the perception that the studies yielded conflicting results. […] In 1992 Lau et al. published a meta-analysis that synthesized the results from the 33 studies. […] [They found that] the treatment reduces the risk of death by some 21%. And, this effect was reasonably consistent across all studies in the analysis. […] The narrative review has no mechanism for synthesizing the p-values from the different studies, and must deal with them as discrete pieces of data. In this example six of the studies were statistically significant while the other 27 were not, which led some to conclude that there was evidence against an effect, or that the results were inconsistent […] By contrast, the meta-analysis allows us to combine the effects and evaluate the statistical significance of the summary effect. The p-value for the summary effect [was] p=0.0000008. […] While one might assume that 27 studies failed to reach statistical significance because they reported small effects, it is clear […] that this is not the case. In fact, the treatment effect in many of these studies was actually larger than the treatment effect in the six studies that were statistically significant. Rather, the reason that 82% of the studies were not statistically significant is that these studies had small sample sizes and low statistical power.”
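
A small simulation (purely hypothetical numbers, not the Lau et al. data) illustrates how a collection of underpowered trials, most of them individually nonsignificant, can combine into an overwhelmingly significant pooled estimate:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# 33 simulated trials, all sampling the same true log risk ratio of
# log(0.79) (about a 21% risk reduction), with varying precision.
true_log_rr = np.log(0.79)
se = rng.uniform(0.15, 0.60, size=33)   # larger SE = smaller trial
y = rng.normal(true_log_rr, se)         # observed log risk ratios

significant = np.abs(y / se) > 1.96
print(f"Individually significant trials: {significant.sum()} of 33")

# Fixed-effect pooled estimate and its p-value.
w = 1.0 / se**2
pooled = np.sum(w * y) / np.sum(w)
z = pooled / np.sqrt(1.0 / np.sum(w))
print(f"Pooled RR: {np.exp(pooled):.2f}, p = {2 * stats.norm.sf(abs(z)):.2g}")
```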

“the [narrative] review will often focus on the question of whether or not the body of evidence allows us to reject the null hypothesis. There is no good mechanism for discussing the magnitude of the effect. By contrast, the meta-analytic approaches discussed in this volume allow us to compute an estimate of the effect size for each study, and these effect sizes fall at the core of the analysis. This is important because the effect size is what we care about. If a clinician or patient needs to make a decision about whether or not to employ a treatment, they want to know if the treatment reduces the risk of death by 5% or 10% or 20%, and this is the information carried by the effect size. […] The p-value can tell us only that the effect is not zero, and to report simply that the effect is not zero is to miss the point. […] The narrative review has no good mechanism for assessing the consistency of effects. The narrative review starts with p-values, and because the p-value is driven by the size of a study as well as the effect in that study, the fact that one study reported a p-value of 0.001 and another reported a p-value of 0.50 does not mean that the effect was larger in the former. The p-value of 0.001 could reflect a large effect size but it could also reflect a moderate or small effect in a large study […] The p-value of 0.50 could reflect a small (or nil) effect size but could also reflect a large effect in a small study […] This point is often missed in narrative reviews. Often, researchers interpret a nonsignificant result to mean that there is no effect. If some studies are statistically significant while others are not, the reviewers see the results as conflicting. This problem runs through many fields of research. […] By contrast, meta-analysis completely changes the landscape. First, we work with effect sizes (not p-values) to determine whether or not the effect size is consistent across studies. Additionally, we apply methods based on statistical theory to allow that some (or all) of the observed dispersion is due to random sampling variation rather than differences in the true effect sizes. Then, we apply formulas to partition the variance into random error versus real variance, to quantify the true differences among studies, and to consider the implications of this variance.”
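
The point about p-values being driven by sample size as well as effect size takes only a few lines to demonstrate. In this invented example, two studies estimate exactly the same mean difference, yet only the larger one is ‘significant’:

```python
import numpy as np
from scipy import stats

# Two hypothetical studies: same mean difference (5 units, SD = 20),
# different sample sizes per group.
for n in (20, 800):
    se = 20 * np.sqrt(2 / n)            # SE of the difference in means
    p = 2 * stats.norm.sf(5 / se)
    print(f"n per group = {n:3d}: p = {p:.6f}")
# Same effect, very different p-values.
```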

“Consider […] the case where some studies report a difference in means, which is used to compute a standardized mean difference. Others report a difference in proportions which is used to compute an odds ratio. And others report a correlation. All the studies address the same broad question, and we want to include them in one meta-analysis. […] we are now dealing with different indices, and we need to convert them to a common index before we can proceed. The question of whether or not it is appropriate to combine effect sizes from studies that used different metrics must be considered on a case by case basis. The key issue is that it only makes sense to compute a summary effect from studies that we judge to be comparable in relevant ways. If we would be comfortable combining these studies if they had used the same metric, then the fact that they used different metrics should not be an impediment. […]  When some studies use means, others use binary data, and others use correlational data, we can apply formulas to convert among effect sizes. […] When we convert between different measures we make certain assumptions about the nature of the underlying traits or effects. Even if these assumptions do not hold exactly, the decision to use these conversions is often better than the alternative, which is to simply omit the studies that happened to use an alternate metric. This would involve loss of information, and possibly the systematic loss of information, resulting in a biased sample of studies. A sensitivity analysis to compare the meta-analysis results with and without the converted studies would be important. […] Studies that used different measures may [however] differ from each other in substantive ways, and we need to consider this possibility when deciding if it makes sense to include the various studies in the same analysis.”
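
To give a flavour of these conversions, here is a sketch of two formulas commonly used in the meta-analysis literature (the odds ratio below is invented): a log odds ratio can be converted to a standardized mean difference d by assuming the underlying trait is logistic, and d can in turn be converted to a correlation r:

```python
import math

def log_odds_ratio_to_d(log_or):
    # Assumes an underlying logistic distribution.
    return log_or * math.sqrt(3) / math.pi

def d_to_r(d, a=4.0):
    # a = (n1 + n2)**2 / (n1 * n2); a = 4 when the group sizes are equal.
    return d / math.sqrt(d**2 + a)

log_or = math.log(2.5)           # a study reporting an odds ratio of 2.5
d = log_odds_ratio_to_d(log_or)  # ... as a standardized mean difference
r = d_to_r(d)                    # ... as a correlation
print(f"d = {d:.3f}, r = {r:.3f}")
```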

“The precision with which we estimate an effect size can be expressed as a standard error or confidence interval […] or as a variance […] The precision is driven primarily by the sample size, with larger studies yielding more precise estimates of the effect size. […] Other factors affecting precision include the study design, with matched groups yielding more precise estimates (as compared with independent groups) and clustered groups yielding less precise estimates. In addition to these general factors, there are unique factors that affect the precision for each effect size index. […] Studies that yield more precise estimates of the effect size carry more information and are assigned more weight in the meta-analysis.”
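
As an illustration of the sample-size point, here is the commonly used approximate variance of the standardized mean difference d for two independent groups (invented numbers; other effect size indices have their own variance formulas):

```python
import math

def var_d(d, n1, n2):
    # Approximate variance of the standardized mean difference d.
    return (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))

for n in (10, 50, 250):
    v = var_d(0.5, n, n)
    print(f"n per group = {n:3d}: variance = {v:.4f}, weight = {1 / v:.1f}")
```

This also shows directly why larger studies get more weight: the inverse-variance weight grows with n.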

“Under the fixed-effect model we assume that all studies in the meta-analysis share a common (true) effect size. […] However, in many systematic reviews this assumption is implausible. When we decide to incorporate a group of studies in a meta-analysis, we assume that the studies have enough in common that it makes sense to synthesize the information, but there is generally no reason to assume that they are identical in the sense that the true effect size is exactly the same in all the studies. […] Because studies will differ in the mixes of participants and in the implementations of interventions, among other reasons, there may be different effect sizes underlying different studies. […] One way to address this variation across studies is to perform a random-effects meta-analysis. In a random-effects meta-analysis we usually assume that the true effects are normally distributed. […] Since our goal is to estimate the mean of the distribution, we need to take account of two sources of variance. First, there is within-study error in estimating the effect in each study. Second (even if we knew the true mean for each of our studies), there is variation in the true effects across studies. Study weights are assigned with the goal of minimizing both sources of variance.”

“Under the fixed-effect model we assume that the true effect size for all studies is identical, and the only reason the effect size varies between studies is sampling error (error in estimating the effect size). Therefore, when assigning weights to the different studies we can largely ignore the information in the smaller studies since we have better information about the same effect size in the larger studies. By contrast, under the random-effects model the goal is not to estimate one true effect, but to estimate the mean of a distribution of effects. Since each study provides information about a different effect size, we want to be sure that all these effect sizes are represented in the summary estimate. This means that we cannot discount a small study by giving it a very small weight (the way we would in a fixed-effect analysis). The estimate provided by that study may be imprecise, but it is information about an effect that no other study has estimated. By the same logic we cannot give too much weight to a very large study (the way we might in a fixed-effect analysis). […] Under the fixed-effect model there is a wide range of weights […] whereas under the random-effects model the weights fall in a relatively narrow range. […] the relative weights assigned under random effects will be more balanced than those assigned under fixed effects. As we move from fixed effect to random effects, extreme studies will lose influence if they are large, and will gain influence if they are small. […] Under the fixed-effect model the only source of uncertainty is the within-study (sampling or estimation) error. Under the random-effects model there is this same source of uncertainty plus an additional source (between-studies variance). It follows that the variance, standard error, and confidence interval for the summary effect will always be larger (or wider) under the random-effects model than under the fixed-effect model […] Under the fixed-effect model the null hypothesis being tested is that there is zero effect in every study. Under the random-effects model the null hypothesis being tested is that the mean effect is zero. Although some may treat these hypotheses as interchangeable, they are in fact different”
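
The contrast between the two weighting schemes is easy to demonstrate numerically. The sketch below uses invented effect sizes and the DerSimonian–Laird moment estimator of the between-studies variance τ² (one common choice of estimator, used here purely for illustration):

```python
import numpy as np

y = np.array([0.8, 0.3, 0.5, 0.1, 0.9])       # hypothetical effect sizes
v = np.array([0.01, 0.20, 0.05, 0.30, 0.02])  # within-study variances

# Fixed-effect weights.
w_fe = 1 / v

# DerSimonian-Laird estimate of tau^2 (between-studies variance).
mean_fe = np.sum(w_fe * y) / np.sum(w_fe)
q = np.sum(w_fe * (y - mean_fe)**2)
c = np.sum(w_fe) - np.sum(w_fe**2) / np.sum(w_fe)
tau2 = max(0.0, (q - (len(y) - 1)) / c)

# Random-effects weights: tau^2 is added to each within-study variance.
w_re = 1 / (v + tau2)

print("relative FE weights:", np.round(w_fe / w_fe.sum(), 3))
print("relative RE weights:", np.round(w_re / w_re.sum(), 3))
# The random-effects weights fall in a narrower range: large studies
# lose relative influence and small studies gain.
```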

“It makes sense to use the fixed-effect model if two conditions are met. First, we believe that all the studies included in the analysis are functionally identical. Second, our goal is to compute the common effect size for the identified population, and not to generalize to other populations. […] this situation is relatively rare. […] By contrast, when the researcher is accumulating data from a series of studies that had been performed by researchers operating independently, it would be unlikely that all the studies were functionally equivalent. Typically, the subjects or interventions in these studies would have differed in ways that would have impacted on the results, and therefore we should not assume a common effect size. Therefore, in these cases the random-effects model is more easily justified than the fixed-effect model. […] There is one caveat to the above. If the number of studies is very small, then the estimate of the between-studies variance […] will have poor precision. While the random-effects model is still the appropriate model, we lack the information needed to apply it correctly. In this case the reviewer may choose among several options, each of them problematic [and one of which is to apply a fixed effects framework].”

October 28, 2014 | Books, Statistics

Quotes

i. “If people spent as much time studying as they spent hating, I’d be writing this from a goddamn moon-base.” (Zach Weiner)

ii. “Experience comprises illusions lost, rather than wisdom gained.” (Joseph Roux)

iii. “The happiness which is lacking makes one think even the happiness one has unbearable.” (-ll-)

iv. “Men are more apt to be mistaken in their generalizations than in their particular observations.” (Niccolo Machiavelli)

v. “A good reputation is more valuable than money.” (Publilius Syrus)

vi. “Many receive advice, few profit by it.” (-ll-)

vii. “Anyone can hold the helm when the sea is calm.” (-ll-)

viii. “To forget the wrongs you receive, is to remedy them.” (-ll-)

ix. “It is a bad plan that admits of no modification.” (-ll-)

x. “No one knows what he can do till he tries.” (-ll-)

xi. “Everything is worth what its purchaser will pay for it.” (-ll-)

xii. “I have often regretted my speech, never my silence.” (-ll-)

xiii. “We give to necessity the praise of virtue.” (Quintilian)

xiv. “Those who wish to appear wise among fools, among the wise seem foolish.” (-ll-)

xv. “Shared danger is the strongest of bonds; it will keep men united in spite of mutual dislike and suspicion.” (Titus Livius Patavinus)

xvi. “Favor and honor sometimes fall more fitly on those who do not desire them.” (-ll-)

xvii. “Men are only too clever at shifting blame from their own shoulders to those of others.” (-ll-)

xviii. “Men are slower to recognise blessings than misfortunes.” (-ll-)

xix. “It is easier to criticize than to correct our past errors.” (-ll-)

xx. “There is an old saying which, from its truth, has become proverbial, that friendships should be immortal, enmities mortal.” (-ll-)


October 26, 2014 | Quotes/aphorisms

Wikipedia articles of interest

(A minor note: These days when I’m randomly browsing wikipedia and not just looking up concepts or terms found in the books I read, I’m mostly browsing the featured content on wikipedia. There’s a lot of featured stuff, and on average such articles are more interesting than random articles. As a result of this approach, all articles covered in the post below are featured articles. A related consequence of this shift is that I may cover fewer articles in future wikipedia posts than I have in the past; this post only contains five articles, which I believe is slightly fewer than usual for these posts – a big reason for this being that it sometimes takes a lot of time to read a featured article.)

i. Woolly mammoth.

[Image: Ice age fauna of northern Spain – Mauricio Antón]

“The woolly mammoth (Mammuthus primigenius) was a species of mammoth, the common name for the extinct elephant genus Mammuthus. The woolly mammoth was one of the last in a line of mammoth species, beginning with Mammuthus subplanifrons in the early Pliocene. M. primigenius diverged from the steppe mammoth, M. trogontherii, about 200,000 years ago in eastern Asia. Its closest extant relative is the Asian elephant. […] The earliest known proboscideans, the clade which contains elephants, existed about 55 million years ago around the Tethys Sea. […] The family Elephantidae existed six million years ago in Africa and includes the modern elephants and the mammoths. Among many now extinct clades, the mastodon is only a distant relative of the mammoths, and part of the separate Mammutidae family, which diverged 25 million years before the mammoths evolved.[12] […] The woolly mammoth coexisted with early humans, who used its bones and tusks for making art, tools, and dwellings, and the species was also hunted for food.[1] It disappeared from its mainland range at the end of the Pleistocene 10,000 years ago, most likely through a combination of climate change, consequent disappearance of its habitat, and hunting by humans, though the significance of these factors is disputed. Isolated populations survived on Wrangel Island until 4,000 years ago, and on St. Paul Island until 6,400 years ago.”

“The appearance and behaviour of this species are among the best studied of any prehistoric animal due to the discovery of frozen carcasses in Siberia and Alaska, as well as skeletons, teeth, stomach contents, dung, and depiction from life in prehistoric cave paintings. […] Fully grown males reached shoulder heights between 2.7 and 3.4 m (9 and 11 ft) and weighed up to 6 tonnes (6.6 short tons). This is almost as large as extant male African elephants, which commonly reach 3–3.4 m (9.8–11.2 ft), and is less than the size of the earlier mammoth species M. meridionalis and M. trogontherii, and the contemporary M. columbi. […] Woolly mammoths had several adaptations to the cold, most noticeably the layer of fur covering all parts of the body. Other adaptations to cold weather include ears that are far smaller than those of modern elephants […] The small ears reduced heat loss and frostbite, and the tail was short for the same reason […] They had a layer of fat up to 10 cm (3.9 in) thick under the skin, which helped to keep them warm. […] The coat consisted of an outer layer of long, coarse “guard hair”, which was 30 cm (12 in) on the upper part of the body, up to 90 cm (35 in) in length on the flanks and underside, and 0.5 mm (0.020 in) in diameter, and a denser inner layer of shorter, slightly curly under-wool, up to 8 cm (3.1 in) long and 0.05 mm (0.0020 in) in diameter. The hairs on the upper leg were up to 38 cm (15 in) long, and those of the feet were 15 cm (5.9 in) long, reaching the toes. The hairs on the head were relatively short, but longer on the underside and the sides of the trunk. The tail was extended by coarse hairs up to 60 cm (24 in) long, which were thicker than the guard hairs. It is likely that the woolly mammoth moulted seasonally, and that the heaviest fur was shed during spring.”

“Woolly mammoths had very long tusks, which were more curved than those of modern elephants. The largest known male tusk is 4.2 m (14 ft) long and weighs 91 kg (201 lb), but 2.4–2.7 m (7.9–8.9 ft) and 45 kg (99 lb) was a more typical size. Female tusks averaged at 1.5–1.8 m (4.9–5.9 ft) and weighed 9 kg (20 lb). About a quarter of the length was inside the sockets. The tusks grew spirally in opposite directions from the base and continued in a curve until the tips pointed towards each other. In this way, most of the weight would have been close to the skull, and there would be less torque than with straight tusks. The tusks were usually asymmetrical and showed considerable variation, with some tusks curving down instead of outwards and some being shorter due to breakage.”

“Woolly mammoths needed a varied diet to support their growth, like modern elephants. An adult of six tonnes would need to eat 180 kg (397 lb) daily, and may have foraged as long as twenty hours every day. […] Woolly mammoths continued growing past adulthood, like other elephants. Unfused limb bones show that males grew until they reached the age of 40, and females grew until they were 25. The frozen calf “Dima” was 90 cm (35 in) tall when it died at the age of 6–12 months. At this age, the second set of molars would be in the process of erupting, and the first set would be worn out at 18 months of age. The third set of molars lasted for ten years, and this process was repeated until the final, sixth set emerged when the animal was 30 years old. A woolly mammoth could probably reach the age of 60, like modern elephants of the same size. By then the last set of molars would be worn out, the animal would be unable to chew and feed, and it would die of starvation.[53]”

“The habitat of the woolly mammoth is known as “mammoth steppe” or “tundra steppe”. This environment stretched across northern Asia, many parts of Europe, and the northern part of North America during the last ice age. It was similar to the grassy steppes of modern Russia, but the flora was more diverse, abundant, and grew faster. Grasses, sedges, shrubs, and herbaceous plants were present, and scattered trees were mainly found in southern regions. This habitat was not dominated by ice and snow, as is popularly believed, since these regions are thought to have been high-pressure areas at the time. The habitat of the woolly mammoth also supported other grazing herbivores such as the woolly rhinoceros, wild horses and bison. […] A 2008 study estimated that changes in climate shrank suitable mammoth habitat from 7,700,000 km2 (3,000,000 sq mi) 42,000 years ago to 800,000 km2 (310,000 sq mi) 6,000 years ago.[81][82] Woolly mammoths survived an even greater loss of habitat at the end of the Saale glaciation 125,000 years ago, and it is likely that humans hunted the remaining populations to extinction at the end of the last glacial period.[83][84] […] Several woolly mammoth specimens show evidence of being butchered by humans, which is indicated by breaks, cut-marks, and associated stone tools. It is not known how much prehistoric humans relied on woolly mammoth meat, since there were many other large herbivores available. Many mammoth carcasses may have been scavenged by humans rather than hunted. Some cave paintings show woolly mammoths in structures interpreted as pitfall traps. Few specimens show direct, unambiguous evidence of having been hunted by humans.”

“While frozen woolly mammoth carcasses had been excavated by Europeans as early as 1728, the first fully documented specimen was discovered near the delta of the Lena River in 1799 by Ossip Schumachov, a Siberian hunter.[90] Schumachov let it thaw until he could retrieve the tusks for sale to the ivory trade. [Aargh!] […] The 1901 excavation of the “Berezovka mammoth” is the best documented of the early finds. It was discovered by the Berezovka River, and the Russian authorities financed its excavation. Its head was exposed, and the flesh had been scavenged. The animal still had grass between its teeth and on the tongue, showing that it had died suddenly. […] By 1929, the remains of 34 mammoths with frozen soft tissues (skin, flesh, or organs) had been documented. Only four of them were relatively complete. Since then, about that many more have been found.”

ii. Daniel Lambert.

Daniel Lambert (13 March 1770 – 21 June 1809) was a gaol keeper[n 1] and animal breeder from Leicester, England, famous for his unusually large size. After serving four years as an apprentice at an engraving and die casting works in Birmingham, he returned to Leicester around 1788 and succeeded his father as keeper of Leicester’s gaol. […] At the time of Lambert’s return to Leicester, his weight began to increase steadily, even though he was athletically active and, by his own account, abstained from drinking alcohol and did not eat unusual amounts of food. In 1805, Lambert’s gaol closed. By this time, he weighed 50 stone (700 lb; 318 kg), and had become the heaviest authenticated person up to that point in recorded history. Unemployable and sensitive about his bulk, Lambert became a recluse.

In 1806, poverty forced Lambert to put himself on exhibition to raise money. In April 1806, he took up residence in London, charging spectators to enter his apartments to meet him. Visitors were impressed by his intelligence and personality, and visiting him became highly fashionable. After some months on public display, Lambert grew tired of exhibiting himself, and in September 1806, he returned, wealthy, to Leicester, where he bred sporting dogs and regularly attended sporting events. Between 1806 and 1809, he made a further series of short fundraising tours.

In June 1809, he died suddenly in Stamford. At the time of his death, he weighed 52 stone 11 lb (739 lb; 335 kg), and his coffin required 112 square feet (10.4 m2) of wood. Despite the coffin being built with wheels to allow easy transport, and a sloping approach being dug to the grave, it took 20 men almost half an hour to drag his casket into the trench, in a newly opened burial ground to the rear of St Martin’s Church.”

“Sensitive about his weight, Daniel Lambert refused to allow himself to be weighed, but sometime around 1805, some friends persuaded him to come with them to a cock fight in Loughborough. Once he had squeezed his way into their carriage, the rest of the party drove the carriage onto a large scale and jumped out. After deducting the weight of the (previously weighed) empty carriage, they calculated that Lambert’s weight was now 50 stone (700 lb; 318 kg), and that he had thus overtaken Edward Bright, the 616-pound (279 kg) “Fat Man of Maldon”,[23] as the heaviest authenticated person in recorded history.[20][24]

Despite his shyness, Lambert badly needed to earn money, and saw no alternative to putting himself on display, and charging his spectators.[20] On 4 April 1806, he boarded a specially built carriage and travelled from Leicester[26] to his new home at 53 Piccadilly, then near the western edge of London.[20] For five hours each day, he welcomed visitors into his home, charging each a shilling (about £3.5 as of 2014).[18][25] […] Lambert shared his interests and knowledge of sports, dogs and animal husbandry with London’s middle and upper classes,[27] and it soon became highly fashionable to visit him, or become his friend.[27] Many called repeatedly; one banker made 20 visits, paying the admission fee on each occasion.[17] […] His business venture was immediately successful, drawing around 400 paying visitors per day. […] People would travel long distances to see him (on one occasion, a party of 14 travelled to London from Guernsey),[n 5] and many would spend hours speaking with him on animal breeding.”

“After some months in London, Lambert was visited by Józef Boruwłaski, a 3-foot 3-inch (99 cm) dwarf then in his seventies.[44] Born in 1739 to a poor family in rural Pokuttya,[45] Boruwłaski was generally considered to be the last of Europe’s court dwarfs.[46] He was introduced to the Empress Maria Theresa in 1754,[47] and after a short time residing with deposed Polish king Stanisław Leszczyński,[44] he exhibited himself around Europe, thus becoming a wealthy man.[48] At age 60, he retired to Durham,[49] where he became such a popular figure that the City of Durham paid him to live there[50] and he became one of its most prominent citizens […] The meeting of Lambert and Boruwłaski, the largest and smallest men in the country,[51] was the subject of enormous public interest”

“There was no autopsy, and the cause of Lambert’s death is unknown.[65] While many sources say that he died of a fatty degeneration of the heart or of stress on his heart caused by his bulk, his behaviour in the period leading to his death does not match that of someone suffering from cardiac insufficiency; witnesses agree that on the morning of his death he appeared well, before he became short of breath and collapsed.[65] Bondeson (2006) speculates that the most consistent explanation of his death, given his symptoms and medical history, is that he had a sudden pulmonary embolism.[65]”

iii. Geology of the Capitol Reef area.

[Image: Waterpocket Fold – looking south from the Strike Valley Overlook]

“The exposed geology of the Capitol Reef area presents a record of mostly Mesozoic-aged sedimentation in an area of North America in and around Capitol Reef National Park, on the Colorado Plateau in southeastern Utah.

Nearly 10,000 feet (3,000 m) of sedimentary strata are found in the Capitol Reef area, representing nearly 200 million years of geologic history of the south-central part of the U.S. state of Utah. These rocks range in age from Permian (as old as 270 million years old) to Cretaceous (as young as 80 million years old.)[1] Rock layers in the area reveal ancient climates as varied as rivers and swamps (Chinle Formation), Sahara-like deserts (Navajo Sandstone), and shallow ocean (Mancos Shale).

The area’s first known sediments were laid down as a shallow sea invaded the land in the Permian. At first sandstone was deposited but limestone followed as the sea deepened. After the sea retreated in the Triassic, streams deposited silt before the area was uplifted and underwent erosion. Conglomerate followed by logs, sand, mud and wind-transported volcanic ash were later added. Mid to Late Triassic time saw increasing aridity, during which vast amounts of sandstone were laid down along with some deposits from slow-moving streams. As another sea started to return it periodically flooded the area and left evaporite deposits. Barrier islands, sand bars and later, tidal flats, contributed sand for sandstone, followed by cobbles for conglomerate and mud for shale. The sea retreated, leaving streams, lakes and swampy plains to become the resting place for sediments. Another sea, the Western Interior Seaway, returned in the Cretaceous and left more sandstone and shale only to disappear in the early Cenozoic.”

“The Laramide orogeny compacted the region from about 70 million to 50 million years ago and in the process created the Rocky Mountains. Many monoclines (a type of gentle upward fold in rock strata) were also formed by the deep compressive forces of the Laramide. One of those monoclines, called the Waterpocket Fold, is the major geographic feature of the park. The 100 mile (160 km) long fold has a north-south alignment with a steeply east-dipping side. The rock layers on the west side of the Waterpocket Fold have been lifted more than 7,000 feet (2,100 m) higher than the layers on the east.[23] Thus older rocks are exposed on the western part of the fold and younger rocks on the eastern part. This particular fold may have been created due to movement along a fault in the Precambrian basement rocks hidden well below any exposed formations. Small earthquakes centered below the fold in 1979 may be from such a fault.[24] […] Ten to fifteen million years ago the entire region was uplifted several thousand feet (well over a kilometer) by the creation of the Colorado Plateaus. This time the uplift was more even, leaving the overall orientation of the formations mostly intact. Most of the erosion that carved today’s landscape occurred after the uplift of the Colorado Plateau with much of the major canyon cutting probably occurring between 1 and 6 million years ago.”

iv. Problem of Apollonius.

“In Euclidean plane geometry, Apollonius’s problem is to construct circles that are tangent to three given circles in a plane (Figure 1).

[Figure 1: A typical solution to Apollonius’ problem]

Apollonius of Perga (ca. 262 BC – ca. 190 BC) posed and solved this famous problem in his work Ἐπαφαί (Epaphaí, “Tangencies”); this work has been lost, but a 4th-century report of his results by Pappus of Alexandria has survived. Three given circles generically have eight different circles that are tangent to them […] and each solution circle encloses or excludes the three given circles in a different way […] The general statement of Apollonius’ problem is to construct one or more circles that are tangent to three given objects in a plane, where an object may be a line, a point or a circle of any size.[1][2][3][4] These objects may be arranged in any way and may cross one another; however, they are usually taken to be distinct, meaning that they do not coincide. Solutions to Apollonius’ problem are sometimes called Apollonius circles, although the term is also used for other types of circles associated with Apollonius. […] A rich repertoire of geometrical and algebraic methods have been developed to solve Apollonius’ problem,[9][10] which has been called “the most famous of all” geometry problems.[3]”

v. Globular cluster.

“A globular cluster is a spherical collection of stars that orbits a galactic core as a satellite. Globular clusters are very tightly bound by gravity, which gives them their spherical shapes and relatively high stellar densities toward their centers. The name of this category of star cluster is derived from the Latin globulus—a small sphere. A globular cluster is sometimes known more simply as a globular.

Globular clusters, which are found in the halo of a galaxy, contain considerably more stars and are much older than the less dense galactic, or open clusters, which are found in the disk. Globular clusters are fairly common; there are about 150[2] to 158[3] currently known globular clusters in the Milky Way, with perhaps 10 to 20 more still undiscovered.[4] Large galaxies can have more: Andromeda, for instance, may have as many as 500. […]

Every galaxy of sufficient mass in the Local Group has an associated group of globular clusters, and almost every large galaxy surveyed has been found to possess a system of globular clusters.[8] The Sagittarius Dwarf galaxy and the disputed Canis Major Dwarf galaxy appear to be in the process of donating their associated globular clusters (such as Palomar 12) to the Milky Way.[9] This demonstrates how many of this galaxy’s globular clusters might have been acquired in the past.

Although it appears that globular clusters contain some of the first stars to be produced in the galaxy, their origins and their role in galactic evolution are still unclear.”

October 23, 2014 | Astronomy, Biology, Ecology, Geography, Geology, History, Mathematics, Paleontology, Wikipedia, Zoology

Medical Statistics at a Glance

I wasn’t sure if I should blog this book or not, but in the end I decided to add a few observations here – you can read my goodreads review here.

Before I started reading the book I was considering whether it’d be worth it, as a book like this might have little to offer for someone with my background – I’ve had a few stats courses at this point, and it’s not like the specific topic of medical statistics is completely unknown to me; for example I read an epidemiology textbook just last year, and Hill and Glied and Smith covered related topics as well. It wasn’t that I thought there wasn’t a lot of medical statistics left for me to learn – there is – it was more of a concern that this specific (type of) book might not be the book to read if I wanted to learn a lot of new stuff in this area.

Disregarding the specific medical context of the book I knew a lot of stuff about many of the topics covered. To take an example, Bartholomew’s book devoted a lot of pages to the question of how to handle missing data in a sample, a question this book devotes 5 sentences to. There are a lot of details missing here and the coverage is not very deep. As I hint at in the goodreads review, I think the approach applied in the book is to some extent simply mistaken; I don’t think this (many chapters on different topics, each chapter 2-3 pages long) is a good way to write a statistics textbook. The many different chapters on a wide variety of topics give you the impression that the authors have tried to maximize the number of people who might get something out of this book, which may have ended up meaning that few people will actually get much out of it. On the plus side there are illustrated examples of many of the statistical methods used in the book, and you also get (some of) the relevant formulas for calculating e.g. specific statistics – but you get little understanding of the details of why this works, when it doesn’t, and what happens when it doesn’t. I already mentioned Bartholomew’s book – many other textbooks could be mentioned which are devoted entirely to topics this book covers in two- or three-page chapters – examples include publications such as this, this and this.

Given the way the book starts out (Which types of data exist? How do you calculate an average, and what is a standard deviation?) I think the people most likely to be reading a book like this are people who have a very limited knowledge of statistics and data analysis – and when people like that read stats books, you need to be very careful with your wording and assumptions. Maybe I’m just a grumpy old man, but I’m not sure the authors are careful enough. A couple of examples:

“Statistical modelling includes the use of simple and multiple linear regression, polynomial regression, logistic regression and methods that deal with survival data. All these methods rely on generating the mathematical model that describes the relationship between two or more variables. In general, any model can be expressed in the form:

g(Y) = a + b₁x₁ + b₂x₂ + … + bkxk

where Y is the fitted value of the dependent variable, g(.) is some optional transformation of it (for example, the logit transformation), x₁, …, xk are the predictor or explanatory variables”

(In case you were wondering, it took me 20 minutes to find out how to lower those 1’s and 2’s because it’s not a standard wordpress function and you need to really want to find out how to do this in order to do it. The k’s still look like crap, but I’m not going to spend more time trying to figure out how to make this look neat. I of course could not copy the book formula into the post, or I would have done that. As I’ve pointed out many times, it’s a nightmare to cover mathematical topics on a blog like this. Yeah, I know Terry Tao also blogs on wordpress, but presumably he writes his posts in a different program – I’m very much against the idea of doing this, even if I am sometimes – in situations like these – seriously reconsidering whether I should do that.)

Let’s look closer at this part again: “In general, any model can be expressed…”

This choice of words and this specific example are the sort of thing I have in mind. If you don’t know a lot about data analysis and you read a statement like this literally, which is the sort of thing I for one am wont to do, you’ll conclude that there’s no such thing as a model which is non-linear in its parameters. But there are a lot of models like that. Imprecise language like this can be incredibly frustrating because it will lead either to confusion later on, or, if people don’t read another book on any of these topics again, severe overconfidence and mistaken beliefs due to hidden assumptions.
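
For a concrete counterexample to a literal reading of ‘any model’, consider a three-parameter exponential decay, which is non-linear in its parameters and cannot be brought into the quoted linear-predictor form by any transformation g(.). A sketch with simulated data (nothing here comes from the book):

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(x, a, b, c):
    # y = a * exp(-b * x) + c is non-linear in a, b and c;
    # with c != 0 no transformation linearizes it.
    return a * np.exp(-b * x) + c

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 50)
y = decay(x, 3.0, 0.5, 1.0) + rng.normal(0, 0.1, x.size)

params, _ = curve_fit(decay, x, y, p0=[1.0, 1.0, 1.0])
print("estimated (a, b, c):", np.round(params, 2))
```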

Here’s another example from chapter 28, on ‘Performing a linear regression analysis’:

Checking the assumptions
For each observed value of x, the residual is the observed y minus the corresponding fitted Y. Each residual may be either positive or negative. We can use the residuals to check the following assumptions underlying linear regression.
1 There is a linear relationship between x and y: Either plot y against x (the data should approximate a straight line), or plot the residuals against x (we should observe a random scatter of points rather than any systematic pattern).
2 The observations are independent: the observations are independent if there is no more than one pair of observations on each individual.”

This is not good. Arguably the independence assumption is, in some contexts, best conceived of as untestable in practice; but regardless of whether it ‘really’ is or not, there are a lot of ways in which this assumption may be violated, and observations not being derived from the same individual is not sufficient to establish independence. Assuming otherwise is potentially really problematic.
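
For what it’s worth, the residual check the book does describe is easy to sketch (simulated data, purely illustrative): fit the regression, compute the residuals, and look for systematic structure rather than random scatter:

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0, 10, 100)
y = 2 + 0.5 * x + rng.normal(0, 1, x.size)

# Simple linear regression via least squares.
slope, intercept = np.polyfit(x, y, 1)
residuals = y - (intercept + slope * x)

# Crude linearity check: residuals should scatter randomly around zero
# across the range of x; a clear association with x**2 hints at curvature.
print("corr(residuals, x^2):", np.round(np.corrcoef(residuals, x**2)[0, 1], 3))
```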

Here’s another example:

Some words of comfort
Do not worry if you find the theory underlying probability distributions complex. Our experience demonstrates that you want to know only when and how to use these distributions. We have therefore outlined the essentials, and omitted the equations that define the probability distributions. You will find that you only need to be familiar with the basic ideas, the terminology and, perhaps (although infrequently in this computer age), know how to refer to the tables.”

I found this part problematic. If you want to do hypothesis testing using things like the Chi-squared distribution or the F-test (both ‘covered’, sort of, in the book), you need to be really careful about details like the relevant degrees of freedom and how these may depend on what you’re doing with the data, and stuff like this is sometimes not obvious – not even to people who’ve worked with the equations (well, sometimes it is obvious, but it’s easy to forget to correct for estimated parameters and you can’t always expect the program to do this for you, especially not in more complex model frameworks). My position is that if you’ve never even seen the relevant equations, you have no business conducting anything but the most basic of analyses involving these distributions. Of course a person who’s only read this book would not be able to do more than that, but even so instead of ‘some words of comfort’ I’d much rather have seen ‘some words of caution’.
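
A tiny invented example of why the degrees of freedom matter: the same chi-squared statistic yields different p-values depending on how many parameters were estimated from the data, and the difference can straddle conventional significance thresholds:

```python
from scipy import stats

chi2_stat = 11.2
k = 8  # number of categories in a goodness-of-fit test

# df must be reduced by one for each parameter estimated from the data.
print(f"df = {k - 1}: p = {stats.chi2.sf(chi2_stat, df=k - 1):.3f}")  # ~0.13
print(f"df = {k - 2}: p = {stats.chi2.sf(chi2_stat, df=k - 2):.3f}")  # ~0.08
```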

One last one:

Error checking
* Categorical data – It is relatively easy to check categorical data, as the responses for each variable can only take one of a number of limited values. Therefore, values that are not allowable must be errors.”

Nothing else is said about error checking of categorical data in this specific context, so it would be natural to assume from reading this that simply checking whether values are ‘allowable’ or not is sufficient to catch all the errors. But the statement is completely uninformative, because a key term remains undefined: how to define (observation-specific) ‘allowability’ in the first place is the real issue. A proper error-finding algorithm should apply a precise and unambiguous definition of this term, and constructing/applying such an algorithm (even implicitly) is likely to sometimes be quite hard, especially when multiple categories are used and allowed and the category dimension in question is hard to cross-check against other variables. Reading the above sequence, it’d be easy for the reader to assume that this is all very simple and easy.
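
A hypothetical sketch of the distinction (all variable names and rules invented): step 1 is the book’s ‘allowable values’ check, which is indeed easy; step 2, cross-variable allowability, is where the real work begins:

```python
import pandas as pd

# Invented records; 'sex' and 'pregnant' are categorical variables.
df = pd.DataFrame({
    "sex":      ["M", "F", "X", "F", "M"],
    "pregnant": ["no", "yes", "no", "maybe", "yes"],
})

# Step 1: values outside the allowed set are errors (the easy check).
allowed = {"sex": {"M", "F"}, "pregnant": {"yes", "no"}}
for col, ok in allowed.items():
    print(col, "invalid rows:", list(df.index[~df[col].isin(ok)]))

# Step 2: 'yes' is an allowable value for 'pregnant' in general,
# but not for a male record -- allowability is observation-specific.
inconsistent = (df["sex"] == "M") & (df["pregnant"] == "yes")
print("cross-check failures:", list(df.index[inconsistent]))
```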

Oh well, all this said the book did have some good stuff as well. I’ve added some further comments and observations from the book below, with which I did not ‘disagree’ (to the extent that this is even possible). It should be noted that the book has a lot of focus on hypothesis testing and (/how to conduct) different statistical tests, and very little about statistical modelling. Many different tests are either mentioned and/or explicitly covered in the book, which aside from e.g. standard z-, t- and F-tests also include things like e.g. McNemar’s test, Bartlett’s test, the sign test, and the Wilcoxon rank-sum test, most of which were covered – I realized after having read the book – in the last part of the first statistics text I read, a part I was not required to study and so technically hadn’t read. So I did come across some new stuff while reading the book. Those specific parts were actually some of the parts of the book I liked best, because they contained stuff I didn’t already know, and not just stuff which I used to know but had forgotten about. The few additional quotes added below do to some extent illustrate what the book is like, but it should be kept in mind that they’re perhaps not completely ‘fair’, in terms of providing a balanced and representative sample of the kind of stuff included in the publication; there are many (but perhaps not enough…) equations along the way (which I’m not going to blog, for reasons already mentioned), and the book includes detailed explanations and illustrations of how to conduct specific tests – it’s quite ‘hands-on’ in some respects, and a lot of tools will be added to the toolbox of someone who’s not read a similar publication before.

“Generally, we make comparisons between individuals in different groups. For example, most clinical trials (Topic 14) are parallel trials, in which each patient receives one of the two (or occasionally more) treatments that are being compared, i.e. they result in between-individual comparisons.
Because there is usually less variation in a measurement within an individual than between different individuals (Topic 6), in some situations it may be preferable to consider using each individual as his/her own control. These within-individual comparisons provide more precise comparisons than those from between-individual designs, and fewer individuals are required for the study to achieve the same level of precision. In a clinical trial setting, the crossover design[1] is an example of a within-individual comparison; if there are two treatments, every individual gets each treatment, one after the other in a random order to eliminate any effect of calendar time. The treatment periods are separated by a washout period, which allows any residual effects (carry-over) of the previous treatment to dissipate. We analyse the difference in the responses on the two treatments for each individual. This design can only be used when the treatment temporarily alleviates symptoms rather than provides a cure, and the response time is not prolonged.”

“A cohort study takes a group of individuals and usually follows them forward in time, the aim being to study whether exposure to a particular aetiological factor will affect the incidence of a disease outcome in the future […]

Advantages of cohort studies
*The time sequence of events can be assessed.
*They can provide information on a wide range of outcomes.
*It is possible to measure the incidence/risk of disease directly.
*It is possible to collect very detailed information on exposure to a wide range of factors.
*It is possible to study exposure to factors that are rare.
*Exposure can be measured at a number of time points, so that changes in exposure over time can be studied.
*There is reduced recall and selection bias compared with case-control studies (Topic 16).

Disadvantages of cohort studies
*In general, cohort studies follow individuals for long periods of time, and are therefore costly to perform.
*Where the outcome of interest is rare, a very large sample size is needed.
*As follow-up increases, there is often increased loss of patients as they migrate or leave the study, leading to biased results.
*As a consequence of the long time-scale, it is often difficult to maintain consistency of measurements and outcomes over time. […]
*It is possible that disease outcomes and their probabilities, or the aetiology of disease itself, may change over time.”

“A case-control study compares the characteristics of a group of patients with a particular disease outcome (the cases) to a group of individuals without a disease outcome (the controls), to see whether any factors occurred more or less frequently in the cases than the controls […] Many case-control studies are matched in order to select cases and controls who are as similar as possible. In general, it is useful to sex-match individuals (i.e. if the case is male, the control should also be male), and, sometimes, patients will be age-matched. However, it is important not to match on the basis of the risk factor of interest, or on any factor that falls within the causal pathway of the disease, as this will remove the ability of the study to assess any relationship between the risk factor and the disease. Unfortunately, matching [means] that the effect on disease of the variables that have been used for matching cannot be studied.”

Advantages of case-control studies
“quick, cheap and easy […] particularly suitable for rare diseases. […] A wide range of risk factors can be investigated. […] no loss to follow-up.
Disadvantages of case-control studies
Recall bias, when cases have a differential ability to remember certain details about their histories, is a potential problem. For example, a lung cancer patient may well remember the occasional period when he/she smoked, whereas a control may not remember a similar period. […] If the onset of disease preceded exposure to the risk factor, causation cannot be inferred. […] Case-control studies are not suitable when exposures to the risk factor are rare.”

The P-value is the probability of obtaining our results, or something more extreme, if the null hypothesis is true. The null hypothesis relates to the population of interest, rather than the sample. Therefore, the null hypothesis is either true or false and we cannot interpret the P-value as the probability that the null hypothesis is true.”

“Hypothesis tests which are based on knowledge of the probability distributions that the data follow are known as parametric tests. Often data do not conform to the assumptions that underlie these methods (Topic 32). In these instances we can use non-parametric tests (sometimes referred to as distribution-free tests, or rank methods). […] Non-parametric tests are particularly useful when the sample size is small […], and when the data are measured on a categorical scale. However, non-parametric tests are generally wasteful of information; consequently they have less power […] A number of factors have a direct bearing on power for a given test.
*The sample size: power increases with increasing sample size. […]
*The variability of the observations: power increases as the variability of the observations decreases […]
*The effect of interest: the power of the test is greater for larger effects. A hypothesis test thus has a greater chance of detecting a large real effect than a small one.
*The significance level: the power is greater if the significance level is larger”
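
These factors all show up in a standard normal-approximation power calculation for a two-sample test of a standardized effect size d (a textbook formula, not taken from this book; numbers invented):

```python
import math
from scipy import stats

def approx_power(d, n_per_group, alpha=0.05):
    # Approximate power of a two-sided two-sample z-test for effect size d.
    se = math.sqrt(2.0 / n_per_group)
    z_crit = stats.norm.ppf(1 - alpha / 2)
    return stats.norm.sf(z_crit - d / se)

for d in (0.2, 0.5):
    for n in (30, 100):
        print(f"d = {d}, n per group = {n:3d}: power ≈ {approx_power(d, n):.2f}")
# Power rises with both the sample size and the size of the effect.
```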

“The statistical use of the word ‘regression’ derives from a phenomenon known as regression to the mean, attributed to Sir Francis Galton in 1889. He demonstrated that although tall fathers tend to have tall sons, the average height of the sons is less than that of their tall fathers. The average height of the sons has ‘regressed’ or ‘gone back’ towards the mean height of all the fathers in the population. So, on average, tall fathers have shorter (but still tall) sons and short fathers have taller (but still short) sons.
We observe regression to the mean in screening and in clinical trials, when a subgroup of patients may be selected for treatment because their levels of a certain variable, say cholesterol, are extremely high (or low). If the measurement is repeated some time later, the average value for the second reading for the subgroup is usually less than that of the first reading, tending towards (i.e. regressing to) the average of the age- and sex-matched population, irrespective of any treatment they may have received. Patients recruited into a clinical trial on the basis of a high cholesterol level on their first examination are thus likely to show a drop in cholesterol levels on average at their second examination, even if they remain untreated during this period.”
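
Regression to the mean is easy to reproduce by simulation (invented parameters). A subgroup is selected because its first, noisy measurement is extreme; the second measurement is lower on average even though nothing has changed:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

true_level = rng.normal(5.5, 0.8, n)          # stable underlying cholesterol
first = true_level + rng.normal(0, 0.5, n)    # first measurement (noisy)
second = true_level + rng.normal(0, 0.5, n)   # independent second measurement

selected = first > 7.0                        # trial-entry criterion
print(f"mean 1st reading (selected): {first[selected].mean():.2f}")
print(f"mean 2nd reading (selected): {second[selected].mean():.2f}")
# The second reading regresses toward the population mean with no
# treatment at all.
```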

“A systematic review[1] is a formalized and stringent process of combining the information from all relevant studies (both published and unpublished) of the same health condition; these studies are usually clinical trials […] of the same or similar treatments but may be observational studies […] a meta-analysis, because of its inflated sample size, is able to detect treatment effects with greater power and estimate these effects with greater precision than any single study. Its advantages, together with the introduction of meta-analysis software, have led meta-analyses to proliferate. However, improper use can lead to erroneous conclusions regarding treatment efficacy. The following principal problems should be thoroughly investigated and resolved before a meta-analysis is performed.
*Publication bias – the tendency to include in the analysis only the results from published papers; these favour statistically significant findings.
*Clinical heterogeneity – in which differences in the patient population, outcome measures, definition of variables, and/or duration of follow-up of the studies included in the analysis create problems of non-compatibility.
*Quality differences – the design and conduct of the studies may vary in their quality. Although giving more weight to the better studies is one solution to this dilemma, any weighting system can be criticized on the grounds that it is arbitrary.
*Dependence – the results from studies included in the analysis may not be independent, e.g. when results from a study are published on more than one occasion.”
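For concreteness, here is a minimal sketch of the inverse-variance weighting that a fixed-effect meta-analysis typically rests on (the generic textbook calculation, not code from the book; the effect sizes and standard errors below are invented): each study is weighted by the reciprocal of its variance, so more precise studies pull the pooled estimate harder.

```python
# Fixed-effect (inverse-variance) pooling of per-study effect estimates.
import numpy as np

effects = np.array([0.30, 0.15, 0.45, 0.25])  # invented study effect estimates
ses     = np.array([0.10, 0.20, 0.15, 0.08])  # their (invented) standard errors

w = 1.0 / ses**2                         # inverse-variance weights
pooled = np.sum(w * effects) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))     # pooled estimate is more precise than any study
print(f"pooled effect = {pooled:.3f}, SE = {pooled_se:.3f}")
```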

October 21, 2014 Posted by | Books, Epidemiology, Medicine, Statistics

Ecological Dynamics (I?)

“Mathematical models underpin much ecological theory, […] [y]et most students of ecology and environmental science receive much less formal training in mathematics than their counterparts in other scientific disciplines. Motivating both graduate and undergraduate students to study ecological dynamics thus requires an introduction which is initially accessible with limited mathematical and computational skill, and yet offers glimpses of the state of the art in at least some areas. This volume represents our attempt to reconcile these conflicting demands […] Ecology is the branch of biology that deals with the interaction of living organisms with their environment. […] The primary aim of this book is to develop general theory for describing ecological dynamics. Given this aspiration, it is useful to identify questions that will be relevant to a wide range of organisms and/or habitats. We shall distinguish questions relating to individuals, populations, communities, and ecosystems. A population is all the organisms of a particular species in a given region. A community is all the populations in a given region. An ecosystem is a community related to its physical and chemical environment. […] Just as the physical and chemical properties of materials are the result of interactions involving individual atoms and molecules, so the dynamics of populations and communities can be interpreted as the combined effects of properties of many individuals […] All models are (at best) approximations to the truth so, given data of sufficient quality and diversity, all models will turn out to be false. The key to understanding the role of models in most ecological applications is to recognise that models exist to answer questions. A model may provide a good description of nature in one context but be woefully inadequate in another. […] Ecology is no different from other disciplines in its reliance on simple models to underpin understanding of complex phenomena. […] the real world, with all its complexity, is initially interpreted through comparison with the simplistic situations described by the models. The inevitable deviations from the model predictions [then] become the starting point for the development of more specific theory.”

I haven’t blogged this book yet even though it’s been a while since I finished it, and I figured I ought to talk a little bit about it now. As I pointed out on goodreads, I really liked the book. It’s basically a math textbook for biologists which deals with how to set up models in a specific context, namely that of questions pertaining to ecological dynamics; having read the above quote you should at this point have at least some idea of what kind of stuff this field deals with. Here are a few links to examples of applications mentioned/covered in the book which may give you a better idea of the kinds of things covered.

There are 9 chapters in the book, and only the introductory chapter has fewer than 50 ‘named’ equations – most have around 70-80 equations, and 3 of them have more than 100. I have tried to avoid equations in this post, in part because it’s hell to deal with them in wordpress, so I’ll be leaving out a lot of stuff in my coverage. Large chunks of the coverage were to some extent review, but there was also some new stuff in there. The book covers material intended for both undergraduates and graduates, and even though the book is presumably intended for biology majors, many of the ideas can also be ‘transferred’ to other contexts where the same types of specific modelling frameworks might be applied; for example there are some differences between discrete-time models and continuous-time models, and those differences apply regardless of whether you’re modelling animal behaviour or, say, human behaviour. A local stability analysis looks quite similar in the contexts of an economic model and an ecological model. Etc. I’ve tried to mostly talk about rather ‘general stuff’ in this coverage, i.e. model concepts and key ideas covered in the book which might be applicable in other fields of research as well. I’ve tried to keep things reasonably simple in this post, and I’ve only talked about stuff from the first three chapters.

“The simplest ecological models, called deterministic models, make the assumption that if we know the present condition of a system, we can predict its future. Before we can begin to formulate such a model, we must decide what quantities, known as state variables, we shall use to describe the current condition of the system. This choice always involves a subtle balance of biological realism (or at least plausibility) against mathematical complexity. […] The first requirement in formulating a usable model is […] to decide which characteristics are dynamically important in the context of the questions the model seeks to answer. […] The diversity of individual characteristics and behaviours implies that without considerable effort at simplification, a change of focus towards communities will be accompanied by an explosive increase in model complexity. […] A dynamical model is a mathematical statement of the rules governing change. The majority of models express these rules either as an update rule, specifying the relationship between the current and future state of the system, or as a differential equation, specifying the rate of change of the state variables. […] A system with [the] property [that the update rule does not depend on time] is said to be autonomous. […] [If the update rule depends on time, the models are called non-autonomous].”

“Formulation of a dynamic model always starts by identifying the fundamental processes in the system under investigation and then setting out, in mathematical language, the statement that changes in system state can only result from the operation of these processes. The “bookkeeping” framework which expresses this insight is often called a conservation equation or a balance equation. […] Writing down balance equations is just the first step in formulating an ecological model, since only in the most restrictive circumstances do balance equations on their own contain enough information to allow prediction of future values of state variables. In general, [deterministic] model formulation involves three distinct steps: *choose state variables, *derive balance equations, *make model-specific assumptions.
Selection of state variables involves biological or ecological judgment […] Deriving balance equations involves both ecological choices (what processes to include) and mathematical reasoning. The final step, the selection of assumptions particular to any one model, is left to last in order to facilitate model refinement. For example, if a model makes predictions that are at variance with observation, we may wish to change one of the model assumptions, while still retaining the same state variables and processes in the balance equations. […] a remarkably good approximation to […] stochastic dynamics is often obtained by regarding the dynamics as ‘perturbations’ of a non-autonomous, deterministic system. […] although randomness is ubiquitous, deterministic models are an appropriate starting point for much ecological modelling. […] even where deterministic models are inadequate, an essential prerequisite to the formulation and analysis of many complex, stochastic models is a good understanding of a deterministic representation of the system under investigation.”

“Faced with an update rule or a balance equation describing an ecological system, what do we do? The most obvious line of attack is to attempt to find an analytical solution […] However, except for the simplest models, analytical solutions tend to be impossible to derive or to involve formulae so complex as to be completely unhelpful. In other situations, an explicit solution can be calculated numerically. A numerical solution of a difference equation is a table of values of the state variable (or variables) at successive time steps, obtained by repeated application of the update rule […] Numerical solutions of differential equations are more tricky [but sophisticated methods for finding them do exist] […] for simple systems it is possible to obtain considerable insight by ‘numerical experiments’ involving solutions for a number of parameter values and/or initial conditions. For more complex models, numerical analysis is typically the only approach available. But the unpleasant reality is that in the vast majority of investigations it proves impossible to obtain complete or near-complete information about a dynamical system, either by deriving analytical solutions or by numerical experimentation. It is therefore reassuring that over the past century or so, mathematicians have developed methods of determining the qualitative properties of the solutions of dynamic equations, and thus answering many questions […] without explicitly solving the equations concerned.”
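As a concrete illustration of ‘repeated application of the update rule’ (my own minimal example, not the book’s): the discrete logistic update below stands in for whatever rule a given model specifies, and the numerical solution is just the resulting table of state values.

```python
# Numerical solution of a difference equation: apply the update rule
# repeatedly and tabulate the state variable at each time step.
def update(n, r=0.8, k=1.0):
    # one example update rule (an arbitrary choice): discrete logistic growth
    return n + r * n * (1.0 - n / k)

n = 0.1
for t in range(15):
    print(f"t = {t:2d}  N = {n:.4f}")
    n = update(n)
# N climbs towards the equilibrium value K = 1 and stays there.
```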

“[If] the long-term behaviour of the state variable is independent of the initial condition […] the ‘end state’ […] is known as an attractor. […] Equilibrium states need not be attractors; they can be repellers [as well] […] if a dynamical system has an equilibrium state, any initial condition other than the exact equilibrium value may lead to the state variable converging towards the equilibrium or diverging away from it. We characterize such equilibria as stable and unstable respectively. In some models all initial conditions result in the state variable eventually converging towards a single equilibrium value. We characterize such equilibria as globally stable. An equilibrium that is approached only from a subset of all possible initial conditions (often those close to the equilibrium itself) is said to be locally stable. […] The combination of non-periodic solutions and sensitive dependence on initial conditions is the signature of the pattern of behaviour known to mathematicians as chaos.”
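A short demonstration of that signature (a standard example, not taken from the book): iterate the chaotic logistic map from two initial conditions differing by one part in a billion, and watch the trajectories decorrelate.

```python
# Sensitive dependence on initial conditions in the logistic map x -> 4x(1-x).
x, y = 0.2, 0.2 + 1e-9
for t in range(1, 61):
    x, y = 4 * x * (1 - x), 4 * y * (1 - y)
    if t % 15 == 0:
        print(f"t = {t:2d}   |x - y| = {abs(x - y):.2e}")
# The gap grows roughly exponentially until the two solutions are unrelated.
```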

“Most variables and parameters in models have units. […] However, the behaviour of a natural system cannot be affected by the units in which we choose to measure the quantities we use to describe it. This implies that it should be possible to write down the defining equations of a model in a form independent of the units we use. For any dynamical equation to be valid, the quantities being equated must be measured in the same units. How then do we restate such an equation in a form which is unaffected by our choice of units? The answer lies in identifying a natural scale or base unit for each quantity in the equations and then using the ratio of each variable to its natural scale in our dynamic description. Since such ratios are pure numbers, we say that they are dimensionless. If a dynamic equation couched in terms of dimensionless variables is to be valid, then both sides of any equality must likewise be dimensionless. […] the process of non-dimensionalisation, which we call dimensional analysis, can […] yield information on system dynamics. […] Since there is no unique dimensionless form for any set of dynamical equations, it is tempting to cut short the scaling process by ‘setting some parameter(s) equal to one’. Even experienced modellers make embarrassing blunders doing this, and we strongly recommend a systematic […] approach […] The key element in the scaling process is the selection of appropriate base units – the optimal choice being dependent on the questions motivating our study.”
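To see what non-dimensionalisation buys you, consider the standard logistic example (my choice of illustration, not necessarily one of the book’s own worked cases):

\[
\frac{dN}{dt} = rN\left(1 - \frac{N}{K}\right).
\]

Taking the carrying capacity \(K\) as the base unit for population and \(1/r\) as the base unit for time, i.e. writing \(n = N/K\) and \(\tau = rt\), gives

\[
\frac{dn}{d\tau} = n(1 - n),
\]

a form with no free parameters at all: every logistic population, whatever its \(r\) and \(K\), traces the same scaled trajectory, and the two parameters only re-enter when the solution is converted back to the original units.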

“The starting point for selecting the appropriate formalism [in the context of the time dimension] must […] be recognition that real ecological processes operate in continuous time. Discrete-time models make some approximation to the outcome of these processes over a finite time interval, and should thus be interpreted with care. This caution is particularly important as difference equations are intuitively appealing and computationally simple. […] incautious empirical modelling with difference equations can have surprising (adverse) consequences. […] where the time increment of a discrete-time model is an arbitrary modelling choice, model predictions should be shown to be robust against changes in the value chosen.”

“Of the almost limitless range of relations between population flux and local density, we shall discuss only two extreme possibilities. Advection occurs when an external physical flow (such as an ocean current) transports all the members of the population past the point, x [in a spatially one-dimensional model], with essentially the same velocity, v. […] Diffusion occurs when the members of the population move at random. […] This leads to a net flow rate which is proportional to the spatial gradient of population density, with a constant of proportionality D, which we call the diffusion constant. […] the net flow [in this case] takes individuals from regions of high density to regions of low density.” [Some remarks about reaction-diffusion models followed, which I’d initially thought I’d cover here but which turned out to be too much work to deal with (the coverage is highly context-dependent).]
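In symbols (my reconstruction using standard notation; the book’s symbols may differ): if \(N(x,t)\) is the population density and \(J\) the net flux past \(x\), the two cases correspond to

\[
J_{\text{advection}} = vN, \qquad J_{\text{diffusion}} = -D\,\frac{\partial N}{\partial x},
\]

and substituting either into the one-dimensional balance equation \(\partial N/\partial t = -\partial J/\partial x\) yields

\[
\frac{\partial N}{\partial t} = -v\,\frac{\partial N}{\partial x}
\qquad\text{or}\qquad
\frac{\partial N}{\partial t} = D\,\frac{\partial^2 N}{\partial x^2}.
\]

Adding a local growth term \(f(N)\) to the diffusion case gives the reaction-diffusion models the bracketed remark alludes to.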

October 19, 2014 Posted by | Biology, Books, Ecology, Mathematics

Unobserved Variables – Models and Misunderstandings

This is a neat little book in the Springer Briefs in Statistics series. The author is David J Bartholomew, a former statistics professor at the LSE. I wrote a brief goodreads review, but I thought that I might as well also add a post about the book here. The book covers topics such as the EM algorithm, Gibbs sampling, the Metropolis–Hastings algorithm and the Rasch model, and it assumes you’re familiar with stuff like how to do ML estimation, among many other things. I had some passing familiarity with many of the topics he talks about in the book, but I’m sure I’d have benefited from knowing more about some of the specific topics covered. Because large parts of the book are basically unreadable by people without a stats background, I wasn’t sure how much of it it made sense to cover here, but I decided to talk a bit about a few of the things which I believe don’t require you to know a whole lot about this area.

“Modern statistics is built on the idea of models—probability models in particular. [While I was rereading this part, I was reminded of this quote which I came across while finishing my most recent quotes post: “No scientist is as model minded as is the statistician; in no other branch of science is the word model as often and consciously used as in statistics.” Hans Freudenthal.] The standard approach to any new problem is to identify the sources of variation, to describe those sources by probability distributions and then to use the model thus created to estimate, predict or test hypotheses about the undetermined parts of that model. […] A statistical model involves the identification of those elements of our problem which are subject to uncontrolled variation and a specification of that variation in terms of probability distributions. Therein lies the strength of the statistical approach and the source of many misunderstandings. Paradoxically, misunderstandings arise both from the lack of an adequate model and from over reliance on a model. […] At one level is the failure to recognise that there are many aspects of a model which cannot be tested empirically. At a higher level is the failure to recognise that any model is, necessarily, an assumption in itself. The model is not the real world itself but a representation of that world as perceived by ourselves. This point is emphasised when, as may easily happen, two or more models make exactly the same predictions about the data. Even worse, two models may make predictions which are so close that no data we are ever likely to have can ever distinguish between them. […] All model-dependent inference is necessarily conditional on the model. This stricture needs, especially, to be borne in mind when using Bayesian methods. Such methods are totally model-dependent and thus all are vulnerable to this criticism. The problem can apparently be circumvented, of course, by embedding the model in a larger model in which any uncertainties are, themselves, expressed in probability distributions. However, in doing this we are embarking on a potentially infinite regress which quickly gets lost in a fog of uncertainty.”

“Mixtures of distributions play a fundamental role in the study of unobserved variables […] The two important questions which arise in the analysis of mixtures concern how to identify whether or not a given distribution could be a mixture and, if so, to estimate the components. […] Mixtures arise in practice because of failure to recognise that samples are drawn from several populations. If, for example, we measure the heights of men and women without distinction the overall distribution will be a mixture. It is relevant to know this because women tend to be shorter than men. […] It is often not at all obvious whether a given distribution could be a mixture […] even a two-component mixture of normals, has 5 unknown parameters. As further components are added the estimation problems become formidable. If there are many components, separation may be difficult or impossible […] [To add to the problem,] the form of the distribution is unaffected by the mixing [in the case of the mixing of normals]. Thus there is no way that we can recognise that mixing has taken place by inspecting the form of the resulting distribution alone. Any given normal distribution could have arisen naturally or be the result of normal mixing […] if f(x) is normal, there is no way of knowing whether it is the result of mixing and hence, if it is, what the mixing distribution might be.”
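A small sketch of the height example (my own toy numbers, not the book’s): a two-component normal mixture has five parameters – two means, two standard deviations and a mixing weight – and swapping the labels of the two components leaves the distribution unchanged, a first taste of the identifiability problems discussed below.

```python
# Two-component normal mixture: f(x) = p*N(mu1, sd1) + (1-p)*N(mu2, sd2).
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
p, mu, sd = 0.5, (178.0, 165.0), (7.0, 6.5)  # invented "male"/"female" heights

is_first = rng.random(10_000) < p            # component labels (unobserved in practice)
heights = np.where(is_first,
                   rng.normal(mu[0], sd[0], 10_000),
                   rng.normal(mu[1], sd[1], 10_000))

def mixture_pdf(x, p=p, mu=mu, sd=sd):
    return p * stats.norm.pdf(x, mu[0], sd[0]) + (1 - p) * stats.norm.pdf(x, mu[1], sd[1])

# Label switching: swapping the two components gives the identical density.
print(mixture_pdf(171.0), mixture_pdf(171.0, 1 - p, mu[::-1], sd[::-1]))
```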

“Even if there is close agreement between a model and the data it does not follow that the model provides a true account of how the data arose. It may be that several models explain the data equally well. When this happens there is said to be a lack of identifiability. Failure to take full account of this fact, especially in the social sciences, has led to many over-confident claims about the nature of social reality. Lack of identifiability within a class of models may arise because different values of their parameters provide equally good fits. Or, more seriously, models with quite different characteristics may make identical predictions. […] If we start with a model we can predict, albeit uncertainly, what data it should generate. But if we are given a set of data we cannot necessarily infer that it was generated by a particular model. In some cases it may, of course, be possible to achieve identifiability by increasing the sample size but there are cases in which, no matter how large the sample size, no separation is possible. […] Identifiability matters can be considered under three headings. First there is lack of parameter identifiability which is the most common use of the term. This refers to the situation where there is more than one value of a parameter in a given model each of which gives an equally good account of the data. […] Secondly there is what we shall call lack of model identifiability which occurs when two or more models make exactly the same data predictions. […] The third type of identifiability is actually the combination of the foregoing types.

Mathematical statistics is not well-equipped to cope with situations where models are practically, but not precisely, indistinguishable because it typically deals with things which can only be expressed in unambiguously stated theorems. Of necessity, these make clear-cut distinctions which do not always correspond with practical realities. For example, there are theorems concerning such things as sufficiency and admissibility. According to such theorems, for example, a proposed statistic is either sufficient or not sufficient for some parameter. If it is sufficient it contains all the information, in a precisely defined sense, about that parameter. But in practice we may be much more interested in what we might call ‘near sufficiency’ in some more vaguely defined sense. Because we cannot give a precise mathematical definition to what we mean by this, the practical importance of the notion is easily overlooked. The same kind of fuzziness arises with what are called structural equation models (or structural relations models) which have played a very important role in the social sciences. […] we shall argue that structural equation models are almost always unidentifiable in the broader sense of which we are speaking here. […] [our results] constitute a formidable argument against the careless use of structural relations models. […] In brief, the valid use of a structural equations model requires us to lean very heavily upon assumptions about which we may not be very sure. It is undoubtedly true that if such a model provides a good fit to the data, then it provides a possible account of how the data might have arisen. It says nothing about what other models might provide an equally good, or even better fit. As a tool of inductive inference designed to tell us something about the social world, linear structural relations modelling has very little to offer.”

“It is very common for data to be missing and this introduces a risk of bias if inferences are drawn from incomplete samples. However, we are not usually interested in the missing data themselves but in the population characteristics to whose estimation those values were intended to contribute. […] A very longstanding way of dealing with missing data is to fill in the gaps by some means or other and then carry out the standard analysis on the completed data set. This procedure is known as imputation. […] In its simplest form, each missing data point is replaced by a single value. Because there is, inevitably, uncertainty about what the imputed values should be, one can do better by substituting a range of plausible values and comparing the results in each case. This is known as multiple imputation. […] missing values may occur anywhere and in any number. They may occur haphazardly or in some pattern. In the latter case, the pattern may provide a clue to the mechanism underlying the loss of data and so suggest a method for dealing with it. The conditional distribution which we have supposed might be the basis of imputation depends, of course, on the mechanism behind the loss of data. From a practical point of view the detailed information necessary to determine this may not be readily obtainable or, even, necessary. Nevertheless, it is useful to clarify some of the issues by introducing the idea of a probability mechanism governing the loss of data. This will enable us to classify the problems which would have to be faced in a more comprehensive treatment. The simplest, if least realistic, approach is to assume that the chance of being missing is the same for all elements of the data matrix. In that case, we can, in effect, ignore the missing values […] Such situations are designated as MCAR which is an acronym for Missing Completely at Random. […] In the smoking example we have supposed that men are more likely to refuse [to answer] than women. If we go further and assume that there are no other biasing factors we are, in effect, assuming that ‘missingness’ is completely at random for men and women, separately. This would be an example of what is known as Missing at Random (MAR) […] which means that the missing mechanism depends on the observed variables but not on those that are missing. The final category is Missing Not at Random (MNAR) which is a residual category covering all other possibilities. This is difficult to deal with in practice unless one has an unusually complete knowledge of the missing mechanism.

Another term used in the theory of missing data is that of ignorability. The conditional distribution of y given x will, in general, depend on any parameters of the distribution of M [the variable we use to describe the mechanism governing the loss of observations] yet these are unlikely to be of any practical interest. It would be convenient if this distribution could be ignored for the purposes of inference about the parameters of the distribution of x. If this is the case the mechanism of loss is said to be ignorable. In practice it is acceptable to assume that the concept of ignorability is equivalent to that of MAR.”
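A toy demonstration of the MCAR/MAR distinction along the lines of the smoking example above (all numbers invented; ‘income’ stands in for whatever the outcome is): under MCAR the complete cases still estimate the mean correctly, while under MAR, with men refusing more often, the naive complete-case mean is biased even though missingness is random within each sex.

```python
# MCAR vs MAR: the outcome depends on sex, and under MAR men refuse more often.
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
male = rng.random(n) < 0.5
y = 30 + 5 * male + rng.normal(0, 2, n)         # outcome depends on sex

mcar = rng.random(n) < 0.3                      # everyone equally likely to be missing
mar = rng.random(n) < np.where(male, 0.5, 0.1)  # men refuse far more often

print(y.mean(), y[~mcar].mean())  # both ~32.5: MCAR loss is ignorable here
print(y[~mar].mean())             # ~31.8: biased, men (higher y) are under-observed
print(y[~mar & male].mean())      # ~35.0: within-sex means are still fine under MAR
```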


October 16, 2014 Posted by | Books, Econometrics, Economics, Statistics

Quotes

i. “A slave dreams of freedom, a free man dreams of wealth, the wealthy dream of power, and the powerful dream of freedom.” (Andrzej Majewski)

ii. “The tragedy of a thoughtless man is not that he doesn’t think, but that he thinks that he’s thinking.” (-ll-)

iii. “Money is the necessity that frees us from necessity.” (W. H. Auden)

iv. “Young people, who are still uncertain of their identity, often try on a succession of masks in the hope of finding the one which suits them — the one, in fact, which is not a mask.” (-ll-)

v. “The aphorist does not argue or explain, he asserts; and implicit in his assertion is a conviction that he is wiser and more intelligent than his readers.” (-ll-)

vi. “none become at once completely vile.” (William Gifford)

vii. “It is by a wise economy of nature that those who suffer without change, and whom no one can help, become uninteresting. Yet so it may happen that those who need sympathy the most often attract it the least.” (F. H. Bradley)

viii. “He who has imagination without learning has wings but no feet.” (Joseph Joubert)

ix. “It is better to debate a question without settling it than to settle a question without debating it.” (-ll-)

x. “The aim of an argument or discussion should not be victory, but progress.” (-ll-)

xi. “Are you listening to the ones who keep quiet?” (-ll-)

xii. “Writing is closer to thinking than to speaking.” (-ll-)

xiii. “Misery is almost always the result of thinking.” (-ll-)

xiv. “The great inconvenience of new books is that they prevent us from reading the old ones.” (-ll-)

xv. “A good listener is one who helps us overhear ourselves.” (Yahia Lababidi)

xvi. “To suppose, as we all suppose, that we could be rich and not behave as the rich behave, is like supposing that we could drink all day and keep absolutely sober.” (Logan Pearsall Smith)

xvii. “People say that life is the thing, but I prefer reading.” (-ll-)

xviii. “Most men give advice by the bucket, but take it by the grain.” (William Alger)

xix. “There is an instinct that leads a listener to be very sparing of credence when a fact is communicated […] But give him a fable fresh from the mint of the Mendacity Society […] and he will not only make affidavit of its truth, but will call any man out who ventures to dispute its authenticity.” (Samuel Blanchard)

xx. “Experience leaves fools as foolish as ever.” (-ll-)


October 15, 2014 Posted by | Quotes/aphorisms

Sexual Selection in Primates – New and comparative perspectives (II)

You can read my first post about the book here. Let’s talk some more about what we can learn from this publication…

“In a variety of mammals and a few birds, newly immigrated or newly dominant males are known to attack and kill dependent infants […]. Hrdy (1974) was the first to suggest that this bizarre behaviour was the product of sexual selection: by killing infants they did not sire, these males advanced the timing of the mother’s next oestrus and, owing to their new social position, would have a reasonable probability of siring this female’s next infant. […] Although this interpretation, and indeed the phenomenon itself, has been hotly debated for decades […], on balance, this hypothesis provides a far better fit with the observations on primates than any of the alternatives […] several large-scale studies have estimated that the time gained by the infanticidal male amounts to [25-32] per cent of the mean interbirth interval […] Because males rarely, if ever, suffer injuries during infanticidal attacks, and because there is no evidence that committing infanticide leads to reduced tenure length, one can safely conclude that, on average, infanticide is an adaptive male strategy. […] Infanticide often happens when the former dominant male, the most likely sire of most infants even in multi-male groups […], is eliminated or incapacitated. […] dominant males are effective protectors of infants as long as they are not ousted or incapacitated.”

“Conceptually, we can distinguish two kinds of mating by females that may reduce the risk of infanticide. First, by mating polyandrously in potentially fertile periods, females can reduce the concentration of paternity in the dominant male, and spread some of it to other males, so that long-term average paternity probabilities will be somewhat below 1 for the dominant male and somewhat above 0 for the subordinates. Second, by mating during periods of non-fertility […], a female may be able to manipulate the assessment by the various males of their paternity chances, although she obviously cannot change the actual paternity values allocated to the various males. […] The basic prediction is that females that are vulnerable to infanticide by males should be actively polyandrous whenever potentially infanticidal males are present in the mating pool (i.e. the sexually mature males in the social unit or nearby with which the female can mate, in principle). There is ample evidence that primate females in vulnerable species actively pursue polyandrous matings and that they often engage in matings when fertilisation is unlikely or impossible […]. Indeed, females often target low-ranking or peripheral males reluctant to mate in the presence of the dominant males, especially during pregnancy. […] In species vulnerable to infanticide, females often respond to changes in the male cohort of a group with immediate proceptivity, and effectively solicit matings with the new (or newly dominant) male […] It is in the female’s interest to keep individual males guessing as to the extent to which other males have also mated with her […] Hence, females should be likely to mate discreetly, especially with subordinate males. […] We [expect] that matings between females and subordinate males tend to take place out of sight of the dominant male, e.g. at the periphery and away from the group […] it has been noted for several species that matings between females and subordinate males [do] tend to occur rather surreptitiously”

“Even though most primates have concealed ovulations, there is evidence that they use various pre-copulatory mechanisms, such as friendships […] or increased proximity […] with favoured males, copulation calls that are likely to attract particular males […], active solicitation of copulations around the likely conception date […], as well as changes in chemical signals […]; unique vocalizations […]; sexual swellings […] and increased frequencies of particular behaviour patterns during the peri-ovulatory phase […] to signal impending ovulation and/or to increase the chances of fertilization by favoured males.” [Recall also, from the previous post, that which males are actually ‘favoured’ changes significantly during the cycle.]

“Thornhill (1983) suggested that females might exhibit what he called ‘cryptic female choice’ – the differential utilisation of sperm from different males. The term ‘cryptic’ referred to the fact that this choice took place out of sight, inside the female reproductive tract. […] Cryptic female choice is difficult to demonstrate [as] one has to control for all male effects, such as sperm numbers or differential fertilising ability […] Cryptic female choice in primates is poorly documented, even though there are theoretical reasons to expect it to be common. […] The strongest indirect evidence for a mechanism of cryptic female choice in primates is provided by the observation that females of several species of anthropoids (mostly macaques, baboons and chimpanzees) exhibit orgasm […] Physiological measures during artificially induced orgasms [have] demonstrated the occurrence of the same vaginal and uterine contractions that also characterise human orgasm […] and are thought to accelerate and facilitate sperm transport towards the cervix and ovaries […] female orgasm was observed more often in macaque pairs including high-ranking males (Troisi & Carosi, 1998). A comparable effect of male social status on female orgasm rates has also been reported for humans […]. Orgasm therefore has the potential to be used selectively by females to facilitate fertilisation of their eggs by particular males […] This hypothesis is indirectly supported by the observation that female orgasm apparently does not occur among prosimians […], but rather among Old World primates, where the potential for coercive matings by multiple males is highest […]. Seen this way, female primate orgasm may therefore represent an evolutionary response to male sexual coercion that provided females with an edge in the dynamic competition over the control of fertilisation” [Miller’s account/explanation was quite different. I think both explanations are rather speculative at this point. Speculative, but interesting.]

“It has long been an established fact in ethology that interactions with social partners influence an individual’s motivational state and vice versa, and, through interactions, its physiological development and condition. For example, the suppression of reproductive processes by the presence of a same-sex conspecific has been documented for many species, including primates. […] The existence of a conditional [male mating] strategy with different tactics has been demonstrated in several species of mammals. To mention but one clear example: in savannah baboons, a male may decide what tactic to follow in its relationships with females after assessing what others do. Smuts (1985) has shown that dominant males follow a sexual tactic in which they monopolise access to fertile females by contest competition. A subordinate male may use another tactic. He may persuade a female to choose him for mating by rendering services to the female (e.g. protecting her in between-female competition) and thus forming a ‘friendship’ with the female. Similar variation in tactics has been found in other primates (e.g. in rhesus macaques, Berard et al., 1994).”

And there you probably have at least part of the explanation for why millions of romantically frustrated (…‘pathetic’?) human males waste significant parts of their (reproductive) lives catering to the needs of women who already have a sexual partner and are not sexually interested in them – they might not even have been born were it not for the successful application of this type of sit-and-wait strategy on the part of some of their ancestors in the past.

The chapter in question has a lot of stuff about male orangutans, and although it’s quite interesting I won’t go into much detail here. I should note, however, that I think most females will probably prefer the above-mentioned ‘sneaky’ male tactic (I should perhaps note here that in terms of the ‘sneakiness’ of mating strategies, females do pretty well for themselves as well. Indeed in the specific setting it’s not unlikely that it’s actually the females who initiate in a substantial number of cases – see above.) to the mating tactic of unflanged orangutans, which basically amounts to walking around looking for a female unprotected by a flanged male and then, when he comes across one, raping her. In one sample included in the book, of orangutan matings taking place in Tanjung Puting National Park (Indonesia), only 1 or 2 of roughly 20 recorded matings by unflanged males (it’s a bar graph) did not involve a female resisting. These guys are great, and apparently really sexy to the opposite gender… The ratio of resisting to non-resisting females in the case of the matings involving flanged males was pretty much the reverse: a couple of rapes and ~18-19 unforced mating events. It should be noted that the number of matings achieved by the flanged and unflanged males is roughly similar, so judging from these data approximately half of all matings these female orangutans experience during their lives are forced.

“Especially in long-lived organisms such as primates, a male’s success in competing for mates and protecting his offspring should be affected by the nature of major social decisions, such as whether and when to transfer to other groups or to challenge dominants. Several studies indicate dependence of male decisions about transfer and acquisition of rank on age and local demography […]. Likewise, our work on male long-tailed macaques […] indicated a remarkably tight fit between the behavioural decisions of males and expectations based on known determinants of success […], suggesting that natural selection has endowed males with rules that, on average, produce optimal life-history trajectories (or careers) for a given set of conditions. […] Most non-human primates live in groups with continuous male-female association [“Only a minority of about 10 per cent of primate species live in pairs” – from a previous chapter], in which group membership of reproductively active (usually non-natal) males can last many years. For a male living in such a mixed-sex group, dominance rank reflects his relative power in excluding others from resources. However, the impact of dominance on mating success is variable […] Although rank acquisition is usually considered separately from transfer behaviour and mating success, the hypothesis examined here is that they are interdependent […]. We predict that the degree of paternity concentration in the dominant male, determined by his ability to exclude other males from mating, determines the relative benefits of various modes of acquisition of top rank […], and that these together determine patterns of male transfer”

“the cost of inbreeding may cause females to avoid mating with male relatives […]. This tendency has been invoked to explain an apparent female preference for novel (recently immigrated) males”

“a male can attain top rank in a mixed-sex group in three different ways. First, he can defeat the current dominant male during an aggressive challenge […] Second, he can attain top rank during the formation of a new group […] A third way to achieve top rank is by default, or through ‘succession’, after the departure or death of the previous top-ranking male, not preceded by challenges from other males”

The chapter which included the above quotes is quite interesting, but in a way also difficult to quote from given the way it is written. They talk about multiple variables which may affect how likely a male is to leave the group in which he was born (for example, if there are fewer females in the group he is, all else equal, more likely to leave); which mechanism he’s likely to employ in order to try to achieve top rank in his group, if that’s indeed an option (in small groups they always fight for the top spot, and the dominant male will take a very dim view of other mature males trying to encroach upon his territory, whereas in large groups the dominant male is more tolerant of competitors and males are much less likely to settle things by fighting with each other – fighting is probably less common because the dominant male of a large group is generally unable to monopolize access to the females, so a male to some extent ‘gains less’ by achieving alpha status); and when he’s likely to act (a young male is stronger than an old male and can also expect to maintain his tenure as the top male for a longer period of time – so males who try to achieve top rank by fighting for it are likely to be young, whereas males who achieve top rank by other means tend to be older). Whether or not females reproduce in a seasonal pattern also matters. It’s obvious from the data that it’s far from random how and at which point during their lives males make their transfer decisions, and how they settle conflicts about who should get the top spot. The approach in that chapter reminded me a bit of optimal foraging theory, but they didn’t discuss that connection at all in the chapter. Here’s what they concluded from the data they presented in the chapter:

“We found not only variation between species but also remarkable variation within species, or even populations, in the effect of group size on paternity concentration and thus transfer decisions, as well as mode of rank acquisition and likelihood of natal transfer. This variability suggests that a primate male’s behaviour is guided by a set of conditional rules that allow him to respond to a variety of local situations. […] Primate males appear to have a set of conditional rules that allow them to respond flexibly to variation in the potential for paternity concentration. Before mounting a challenge, they assess the situation in their current group, and before making their transfer decisions they monitor the situation in multiple potential-target groups, where this is possible.”

October 14, 2014 Posted by | Biology, Books, Evolutionary biology, Zoology

A brief note on diabetes cures

A friend pointed me to a Danish article talking about this. I pointed out a few problems and reasons to be skeptical to my friend, and I figured I might as well share a few thoughts on these matters here as well. I do not have access to my library at the present point in time, so this post will be less well sourced than most posts I’ve written on related topics in the past.

i. I’ve had diabetes for over 25 years. A cure for type 1 diabetes has been just around the corner for decades. This is not a great argument for assuming that a cure will not be developed in a few years’ time, but you do at some point become a bit skeptical.

ii. The type of ‘mouse diabetes’ people use when they’re doing research on animal models such as NOD mice, from which many such ‘breakthroughs’ are derived, is different from ‘human diabetes’. As pointed out in the reddit thread, “Doug’s group alone has cured diabetes in mice nearly a dozen times”. This may or may not be true, but I’m pretty sure that at the present point in time my probability of being cured of diabetes would be significantly higher if I happened to be one of those lab mice.

iii. A major related point often overlooked in contexts like these is that type 1 diabetes is not one disease – it is a group of different disorders all sharing the feature that the disease process involved leads to destruction of the pancreatic beta-cells. At least this is not a bad way to think about it. This potentially important neglected heterogeneity is worth mentioning when we’re talking about cures. To talk about ‘type 1 diabetes’ as if it’s just one disease is a gross simplification, as multiple different, if similar, disease processes are at work in different patients; some people with ‘the disease’ get sick in days or weeks, in others it takes years to get to the point where symptoms develop. Multiple different gene complexes are involved. Prognosis – both regarding the risk of diabetes-related organ damage and the risk of developing ‘other’ autoimmune conditions (‘other’ because it may be the same disease process causing the ‘other’ diseases as well), such as Hashimoto’s thyroiditis – depends to some extent on the mutations involved. This stuff relates also to the question of what we mean by the word ‘cure’ – more on this below. You might argue that although diabetics are different from each other and vary in a lot of ways, the same thing could be said about the sufferers of all kinds of other diseases, such as, say, prostate cancer. So maybe heterogeneity within this particular patient population is not that important. But the point remains that we don’t treat all prostate cancer patients the same way, and that some are much easier to cure than others.

iv. The distinction between types (type 1, type 2) makes it easy to overlook the fact that there are significant within-group heterogeneities, as mentioned above. But the complexity of the processes involved is perhaps even better illustrated by pointing out that between-group distinctions can also sometimes be quite complicated. The distinction between type 1 and type 2 diabetes is a case in point; usually people say only type 1 is auto-immune, but it was made clear in Sperling et al.’s textbook that that’s not really true; in a minority of type 2 diabetics autoimmune processes are also clearly involved – and this is actually highly relevant, as these subgroups of patients have a much worse prognosis than the type 2 diabetics without autoantibody markers: they’ll on average progress to insulin-dependent disease (uncontrollable by e.g. insulin-sensitizers) much faster than people without an auto-immune disease process. In my experience most people who talk about diabetes online, even well-informed people in e.g. reddit/askscience threads, are not aware of this. I mention it because it’s one obvious example of how hidden within-group heterogeneities can have huge relevance for which treatment modalities are desirable or useful. You’d expect type 2’s with auto-immune processes involved to need a different sort of ‘cure’ than ‘ordinary type 2’s’. For a little more on different ‘varieties’ of diabetes, see also this and this.

There are, as already mentioned, also big differences in outcomes between subgroups within the type 1 group; some people with type 1 diabetes will end up with three or four ‘different’(?) auto-immune diseases, whereas others will get lucky and ‘only’ ever get type 1 diabetes. Not only that, we also know that glycemic control differences between those groups do not account for all the variation in between-group differences in outcomes in terms of diabetes-related complications; type 1 diabetics hit by ‘other’ auto-immune processes (e.g. Graves’ disease) tend to be more likely to develop complications to their diabetes than the rest, regardless of glycemic control. Would successful beta-cell transplants (assuming these at some point become feasible) and the resulting euglycemia still prevent thyroid failure later on in that patient population? Would the people more severely affected, e.g. people with multiple autoimmune conditions, still develop some of the diabetes-related complications, such as cardiovascular complications, even if they had functional beta cells and were to achieve euglycemia, because those problems may be caused by disease aspects, like accelerated atherosclerosis, that are to some extent perhaps unrelated to glycemic control? These are things we really don’t know. It’s very important in that context to note that most diabetics, both type 1 and type 2, die from cardiovascular disease, and that the link between glycemic control and cardiovascular outcomes is much weaker than the one between glycemic control and microvascular complications (e.g. eye disease, kidney disease). There may be reasons why we do not yet have a good picture of just how important euglycemia really is, e.g. because glucose variability, and not just average glucose levels, may be important in terms of outcomes (I recall seeing this emphasized recently in a paper, but I’m not going to look for a source) – and HbA1c only accounts for the latter. So maybe it does all come back to glycemic control; it’s just that we don’t have the full picture yet. Maybe. But to the extent that e.g. cardiovascular outcomes – or other complications in diabetics – are unrelated to glycemic control, beta-cell transplants may not improve cardiovascular outcomes at all. One potential cure might be one where diabetics get beta-cell transplants, achieve euglycemia and are able to drop the insulin injections – yet they still die too soon from heart disease because other aspects of the disease process have not been addressed by the ‘cure’. I don’t think at the current point in time that we really know enough about these diseases to judge whether a hypothetical diabetic with functional transplanted beta-cells may not still to some extent be ‘sick’.

v. If your cure requires active suppression of the immune system, not much will really be gained. A fact that may surprise some people is that we already know how to do ‘curative’ pancreas transplants in diabetics, and these are sometimes done in diabetic patients with kidney failure (“In most cases, pancreas transplantation is performed on individuals with type 1 diabetes with end-stage renal disease, brittle diabetes [poor glycemic control, US] and hypoglycaemia unawareness. The majority of pancreas transplantation (>90%) are simultaneous pancreas-kidney transplantation.” – link) – these people would usually be dead without a kidney transplant, and as they already have to suffer through all the negative transplant-related effects of immune suppression and so on, the idea is that you might as well switch both defective organs while you’re at it, if they’re both available. But immune suppression sucks and these patients do not have great prognoses, so this is not a good way to deal with diabetes in a ‘healthy diabetic’; if rejection problems are not addressed in a much better manner than is currently possible in whole-organ-transplant cases, the attractiveness of any such type of intervention/‘cure’ goes down a lot. In the study they tried to engineer their way around this issue, but whether they’ve been successful in any meaningful way is subject to discussion – I share ‘SirT6’s skepticism at the original reddit link. I’d have to see something like this working in humans for some years before I get too optimistic.

vi. One final aspect is perhaps worth noting. Even a Complete and Ideal Cure involving beta-cell transplants, in a setting where it turns out that everything that goes wrong with all diabetics really is blood-glucose related, is not going to repair the damage that’s already been done. Such aspects will of course matter much more to some people than to others.

October 12, 2014 Posted by | Diabetes

Issues

I’m spending time at my parents’ place at the moment, and that’s actually the only reason why I’m able to write this blog post; the old laptop I’ve been using finally decided to break down earlier this evening. You should expect limited blogging in the week to come – I’m completely cut off at the moment, far away from stuff like computer stores.

I recently realized that the computer and internet issues have really made blogging a chore, in a way – I’m reading books I can’t easily blog, either because my notes keep getting lost or because I’ve lately deliberately been reading books offline (/AFK) to get around these problems. I used to be able to just write a post when I felt like it, but that’s not been the case for the last month; I’ve felt like I had to blog when I had internet, and if I wanted to write when I did not have internet the best I could do was to make a draft using e.g. Word, and I never liked that option. To make it worse, the updating frequency the blog used to have (I don’t know what’ll happen in the weeks to come) was actually quite high considering the amount of work that’s put into each post; it’s really been far from trivial to just ‘put together a blogpost while internet is up’. Notes and highlights intended among other things to facilitate blogging coverage later on have, as mentioned, been lost due to hardware issues, and this is actually a much bigger deal than you might think; it’s not that losing the notes put me completely back to square one, but for practical purposes losing my notes has meant that in order to provide the same type of coverage I’ve usually provided I’ve had to basically read some of these books twice. Not really, but close enough for it to feel that way.

I’ve spent roughly 12 hours or so on Chamberlain’s Symptoms and Signs in Clinical Medicine over the last few days, and now I can basically start over if I want to blog it the way I’d intended to – so instead of a post about that book now, you get this post instead (and I may never blog the book in the amount of detail I’d intended to and would have, if not for these issues). I’m not happy about that, but that’s the way it is. Given that I’ve been thinking about stopping blogging completely lately because it just feels like too much work – it does, now, compared to before – I’ll care a lot less about the updating frequency in the time to come than I’ve done in the past. Blogging should be enjoyable, and it used to be very enjoyable, but these issues have been killing my desire to write here.

October 10, 2014 Posted by | meta, Personal

Sexual Selection in Primates – New and comparative perspectives (I)

[Three images were embedded here – somehow all of them seemed relevant. Links: 1, 2, 3. This one is probably relevant as well.]

Okay, here’s the short version: This book is awesome – I gave it five stars and added it to my list of favourites on goodreads.

It’s the second primatology text I read this year – the first one was Aureli et al.; my coverage of that book can be found here, here and here. I’ve also recently read a few other texts as well which have touched upon arguably semi-related themes; books such as Herrera et al., Gurney and Nisbet, Whitmore and Whitmore, Okasha, Miller, and Bobbi Low. Some of the stuff covered in Holmes et al. turned out to be relevant as well. I mention these books because this book is aimed at graduates in the field (“Sexual Selection in Primates is aimed at graduates and researchers in primatology, animal behaviour, evolutionary biology and comparative psychology“), and although my background is different I have as indicated read some stuff about these kinds of things before – if you know nothing about this stuff, it may be a bit more work for you to read the book than it was for me. I still think you should read it though, as this is the sort of book everybody should read; if they did, people’s opinions about extra-marital sex might change, their understanding of the behavioural strategies people employ when they go about being unfaithful might increase, single moms would find it easier to understand why their dating value is lower than that of their competitors without children, and new dimensions of friendship dynamics – both those involving same-sex individuals and those involving individuals of both sexes – might enter people’s mental model and provide additional angles which might be used by them to help explain why they, or other people, behave the way they do. To take a few examples.

Most humans are probably aware that many males in primate species quite closely related to us habitually engage in activities like baby-killing or rape, and that they do this because such behavioural strategies lead to them being more successful in the fitness context. However they may not be aware that females of those species have implemented behavioural strategies in order to counteract these behaviours; for example females may furtively sleep around with different males in order to confuse the males about who’s the real father of their offspring (you don’t want to kill your own baby), or they may band up with other females, and/or perhaps a strong male, in order to obtain protection from the potential rapists. I mention this in part because a related observation is that it should be clear from observing humans in their natural habitat that most human males are not baby-killers or rapists, and such an observation might easily lead people who have some passing familiarity with the field to think that a lot of the stuff included in a book like this one is irrelevant to human behaviour; a single mom is unlikely to hook up with a guy who kills her infant, so this kind of stuff is probably irrelevant to humans – we are different. I think this is the wrong conclusion to draw. What’s particularly important to note in this context is that counterstrategies are reasonably effective in many primate species, meaning for example that although infanticide does take place in wild primate species, it doesn’t happen that often; we’ve in some respects come a bit further than other species in terms of limiting such behaviours, but in more than a few areas of social behaviour humans actually seem to act in a rather similar manner to those baby-killing rapists and their victims. It’s also really important to observe that sexual conflict is but one of several types of conflicts which organisms such as mammals face, and that the dynamics of such conflicts and aspects like how they are resolved have many cross-species similarities – see Aureli et al. for an overview. It’s difficult and expensive to observe primates in the wild, but when you do it it’s not actually that hard to spot many precursors of- or animal equivalents of various behaviours that humans engage in as well. Some animals are more like us than people like to think, and the common idea that humans are really special and unique on account of our large brains may to some extent be the result of a lack of knowledge about how animals actually behave. Yep, we are different, but perhaps not quite as different as people like to think. Some of the behaviours we like to think of as somehow ‘irreducible’ probably aren’t.

Observations included in a book like this one may well change how you think about many things humans do, at least a little. Humans who are not sexually active have the same evolutionary past as those who are, which means that their behaviours are likely to be, and to have been, shaped by similar mechanisms – an important point being that if even someone like me, who at the moment considers it a likely outcome that I’ll never have sex during my lifetime, is capable of finding stuff covered in a book such as this one relevant and useful, there are probably very few people who wouldn’t find some of the stuff in there relevant and useful to some extent. Modern humans face different decision variables and constraints than our ancestors did, but the brains we’re walking around with are to a significant extent best thought of as the brains of our ancestors – they really haven’t changed that much in, say, the last 100,000 years, and some parts of the ‘code’ we walk around with are literally millions of years old. You need to remember to account for stuff like birth control, ‘culture’ and institutions when you’re dealing with human sexual behaviours today, but a lot of other stuff should be included as well, and books like this one will give you another piece of the puzzle. An important piece, I think.

Although there’s a limited amount of mathematics in this book (mostly an infanticide model in chapter 8), as you can imagine given the target group, the book is really quite dense. There’s way too much good stuff in this book for me to cover all of it here, and I don’t know at this point how detailed my coverage of the book will end up being. A lot of details will be left out, regardless of how many posts I decide to give this book – more than a few chapters are of such high quality that I could easily devote an entire post to each of them. If the stuff I include in my posts sparks your interest, you’ll probably want to read the rest of the book as well.

“In this review I have emphasised five points that modern students of sexual selection ought to keep in mind. First, the list of mechanisms of sexual selection is longer than just the two most famous examples of male-male combat and female choice. Male mate choice and female-female competition are two frequently noted possibilities. Other between-sex social interactions that can result in sexual selection include male coercion of females […] and female resistance to male coercion or manipulation […] sexual selection among females should be as important as male sexual selection to dynamical interactions between the sexes. Sexual selection among females will favour resistance to male attempts to manipulate and control them […] Second, even when a mechanism of intersexual selection depends on interactions between members of opposite sexes, the important thing for selection is the variance in reproductive success among members of one sex. Think about female mate choice for a moment. Whenever choosers discriminate, mate choice may cause variation among the chosen in mating and reproductive success […] Thus, mate choice is a mechanism of sexual selection because it theoretically results in variance among individuals of the chosen sex in mating success and perhaps other components of fitness. […] Third, sexual selection can result in individual tradeoffs among the components of fitness […] Fourth, for a trait to be under selection, there must be variation in the trait. For sexual selection to operate the trait variation must be among individuals of the same sex. […] To argue that an opportunity for sexual selection exists, variation among same-sex individuals in reproductive success must exist. Fifth, between-sex variances in reproductive success alone are […] an insufficient basis for the conclusion that sexual selection operates […], as within-sex variances may arise because of random, non-heritable factors”

“In summary, sex roles fixed by past selection from anisogamy or from parental investment patterns so that females are choosy and males indiscriminate are currently questionable for many species. The factors that determine whether individuals are choosy or indiscriminate seem relatively under-investigated.” (One factor which does seem to be important is the frequency of encounters with potential opposite-sex mates; this variable (how often do you meet a potential partner?) has been shown to affect the sexual behaviours of individuals in species as diverse as fruit flies, fish and butterflies.)

“Because most primates live in stable, long-lasting social groups, pressures for direct sexually selected communication cues may be less than in species with ephemeral mating groups or frequent pairings. Primates are likely to accumulate information about competitors and mates from many sources over a longer time frame. […] Although there do appear to be some communication signals that may be sexually selected, it may be best to consider these signals as biasing factors rather than the determinants of mate choice. For primates, human and non-human, as well as for Japanese quails, gerbils, rats and blue gouramis, there is more to successful reproduction than simply responding to a sexually selected cue. Although I might be initially attracted to a woman with the ‘correct’ breast-to-waist-to-hip ratios, a symmetric face and all of the other hypothesised sexually selected cues, I will quickly learn if she is intelligent or not, if she is emotionally stable, and many other things that should be more important in my reproductive decisions than mere appearance. It is important to keep this in mind in any discussion of sexual selection. […] The strongest evidence, so far, for intersexual selection of traits is observed in female primates, suggesting that male mate choice and female competition may be as important as male competition and female mate choice. […] The data suggest that intersexual selection is as strong if not stronger on female primates than on males.” [As should be very clear at this point, male primates do have standards, despite what the third cartoon at the beginning of this post would have you believe…]

“One form of polyandry that has received much attention is extra-pair copulation (EPC) – sex that a female with a social mate has with a male who is not the social mate. […] Because an evolved adaptation is a product of past direct selection for a function, the question of whether EPC by women is currently adaptive or currently advances women’s reproductive success (RS) is a distinct one. An evolved adaptation may be currently non-adaptive and even maladaptive because the current ecological setting in which it occurs is different from the evolutionary historical setting that was the selection favouring it […] Female EPC is not a rare occurrence in humans. […] Female EPC may be a relatively common occurrence now. But was it sufficiently common in small ancestral populations of humans or pre-hominid primates to be an effective selective force of evolution? Evidence suggests yes, and perhaps the best evidence comes from design features of men rather than women. Men, but not women, can be duped about parentage as a result of EPC, leading to the unknowing investment in another man’s offspring. Men show a rich diversity of mate guarding and anti-cuckoldry tactics ranging from sexual jealousy, vigilance, monopolising a mate’s time, pampering a mate and threatening a mate with harm if she shows interest in other men, to adjusting ejaculate size to defend against the mate’s insemination by a competitor […] Some mate guarding tactics appear to be conditional, such that men guard mates of high fertility status (young or not pregnant) more intensely than ones of low-fertility status (older or pregnant) […] and hence appear not to be caused by general male-male competitive strivings but rather concern for fidelity of a primary social mate […] We […] asked women in [a] study to report their primary mate’s mate-retention tactics. Our questionnaire measures two major dimensions, ‘proprietariness’ and ‘attentiveness’. Women reported their partners to be higher on both when fertile [i.e., mid-cycle].”

“Women’s preferences shift across the [menstrual] cycle in a number of ways. They particularly prefer the scent and faces of more symmetrical men when fertile. The face they find most attractive when fertile is more masculine than the face they most prefer when not fertile. They prefer more assertive, intrasexually competitive displays when fertile than when not. [An example: “The behaviours of men being interviewed by women for a lunch date were coded for a host of verbal and non-verbal qualities [by Gangestad et al.]. Through principal components analysis of these codes, two major dimensions along which men’s performance varied were identified; ‘social presence’, marked by a man’s composure, his direct eye contact and lack of downward gaze, as well as a lack of self-deprecation, and emphasis that he’s a ‘nice guy’; and ‘direct intrasexual competitiveness’, marked by a man’s explicit derogation of his competitor and statements to the effect that he is the better choice, as well as not being obviously agreeable.”] Furthermore, evidence indicates that their preferences when evaluating men as sex partners (i.e. their sexiness) are particularly affected; evidence shows that their evaluations of men as long-term partners shift little, if at all. […] symmetrical men appear to invest less time in and are less faithful to their primary relationship partners […] [The] pattern of findings suggests that it is not simply the case that all traits preferred by females are particularly preferred mid-cycle; that fertility status simply enhances existing preferences. Rather, it appears that only specific preferences are enhanced – perhaps those for features that ancestrally were indicators of genetic benefits. Preferences for features particularly important in long-term investing mates may actually be more prominent outside the fertile period.”
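
(A quick aside on the method mentioned in the bracketed example: principal components analysis is a standard way of boiling many correlated behavioural codes down to a few summary dimensions. Below is a minimal Python sketch of that kind of analysis – the data are randomly generated and the names are my own hypothetical stand-ins, not Gangestad et al.’s actual codes or results; it only illustrates the mechanics.)

```python
# Minimal sketch of a PCA of behavioural interview codes. The data are
# randomly generated and the code names are hypothetical stand-ins; this
# only illustrates the mechanics of extracting summary dimensions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical rater scores for 50 interviewed men on 8 coded verbal and
# non-verbal qualities (e.g. eye contact, self-deprecation, derogation of
# a competitor).
n_men, n_codes = 50, 8
codes = rng.normal(size=(n_men, n_codes))

# Standardize the codes, then extract two principal components -- the
# analogues of the 'social presence' and 'direct intrasexual
# competitiveness' dimensions described in the quote.
z = StandardScaler().fit_transform(codes)
pca = PCA(n_components=2)
scores = pca.fit_transform(z)   # each man's position on the two dimensions

# Loadings show which coded behaviours define each extracted dimension.
print("explained variance ratios:", pca.explained_variance_ratio_)
print("component loadings:\n", pca.components_)
```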

“STDs typically have been viewed as a curious group of parasites rather than established entities with important selective effects on their hosts […]. In recent decades, this view has changed, primarily through our increased understanding of HIV […] [There are] at least three major costs of STDs: (1) A large proportion of STDs increase the risk of sterility in males and females. (2) STDs commonly exhibit vertical transmission, with severe consequences for offspring health [see also this – Holmes et al. covers this stuff in some detail, and actually the authors refer to an older version of that book in this context]. (3) Relative to infectious disease transmitted by non-sexual contact, STDs commonly exhibit long infectious periods with low host recovery, failure to clear infectious organisms following recovery, or limited immunity to reinfection. […] Many negative consequences of STD infection probably provide benefits to the parasites themselves, increasing the likelihood of invasion, transmission and persistence […] In mammals, for example, host infertility is likely to result in repeated cycling by females and may consequently increase their number of sexual contacts. [Mind blown! I’d never even thought about this.] Primates offer an important opportunity to test this hypothesis, because the frequency of infertile females within wild groups may exceed 10 per cent […]. Similarly, STDs that increase host mortality or possess short infectious periods are less likely to survive until the next breeding season, when contact is established with new, uninfected hosts […] Thus, in addition to long infectious periods, STDs tend to produce less disease-induced mortality relative to other infectious diseases”

“Because sexual reproduction offers an important mechanism for disease spread and may even be influenced by infection status, it is pertinent to ask whether animals can identify infected individuals and avoid mating with them. Symptoms such as visible lesions, sores, discharge around the genitalia or olfactory cues may provide evidence of infection. […] many human STDs are […] characterized by limited symptoms or, in the case of viruses, asymptomatic shedding […] reproductive success of an STD is correlated with partner exchange and successful matings of infected hosts. Therefore, virulent parasites that produce outward signs of infection will experience decreased transmission because they provide conspicuous cues for choosy members of the opposite sex to avoid infected mates. […] A parasite faces two main barriers, or defences, imposed by the host: behavioural counter-strategies to avoid exposure, and physical or immune defences […]. The order of events can vary, but behavioural mechanisms commonly are viewed as the first line of defence. An important point we wish to emphasise is that host behaviour to avoid exposure prior to mating is likely to have other reproductive costs, and these costs may outweigh their benefits. […] male and female behaviour indicates that STD risk is of secondary importance relative to other selective pressures operating on mating success. Females mate polyandrously to reduce infanticide risk […] and, for similar reasons, they prefer novel males, though risking infection with STDs acquired from other social groups. Males prefer females of intermediate age that have already produced offspring, as these females have high reproductive value […]. Both sets of decisions by males and females are expected to increase exposure to STDs by increasing the number of partners and mating events.”

October 6, 2014 Posted by | Anthropology, Biology, Books, Evolutionary biology, Zoology

Adolescents and Adults with Autism Spectrum Disorders (2)

I read the rest of the book, but I didn’t particularly like the last part either – I gave the book one star on Goodreads. A couple of the last chapters were sort of okay, but they were not good enough for me to change my mind about the book in general.

“Whether the child who is suspected as being on the autism spectrum is “high” or “low” functioning, it is important to ascertain a sense of their global functioning particularly in the area of ADLs [activities of daily living]. Many professionals erroneously assume a “high-functioning” individual with, perhaps, Asperger syndrome functions in the community at an age-appropriate level. Professionals are often beguiled by “Aspies” vocabularies and intelligence. They may think it is unnecessary to conduct an assessment of the person’s adaptive functioning because his or her IQ is in the normal to superior range. However, because of impairments in social communication, executive functioning, and the ability to read facial expressions and non-literal communication, individuals with Asperger syndrome can have extreme difficulty functioning on a daily basis. […] many are diagnosed in adulthood […]. For a community-based sample in Canada, 48 % of individuals with “high-functioning autism” and AS were not diagnosed until they were 21 years or older (Stoddart et al., 2013). […] In higher-functioning individuals with average to above-average IQs, there is often a “cloak of competence” […] or an assumption of competence when it comes to […] basic life skills. Adults who are highly accomplished in their work may have basic problems with organization at home, or for tasks which are of less interest […] outcome is variable, as some individuals with very high IQ face significant challenges in adulthood. […] Depressed mood may be particularly common in high functioning adults, who have insight into their social and adaptive difficulties, and who may desire to make changes but have limited success in doing so”

“higher functioning individuals who have a considerable vocabulary may still have a [speech] disability. Individuals with Asperger syndrome for example have an impressive vocabulary even at a young age, but they have impairments in the semantics and pragmatics of speech. They may also have issues with prosody that need to be assessed and addressed.”

“Understanding which variables predict adult outcomes in ASD is a crucial goal for the field, but we know little about what variables predict different outcomes. […] Longitudinal studies of ASD from childhood to adulthood have consistently yielded only two useful prognostic factors for adult outcome in ASD. A childhood IQ score in the near-average or average ranges (i.e., ≥70) and communicative phrase speech before [age] 6 appear necessary but insufficient for a person to access a moderate level of independence in adulthood […] Individuals having these childhood characteristics have widely varying adult outcomes, so exhibiting these characteristics in childhood is no guarantee that a person will achieve adult independence. The predictive utility of other childhood variables has been examined. Findings regarding severity ratings of childhood ASD symptoms have been mixed”

“Almost all studies that have examined developmental trajectories for individuals with ASDs show that these individuals exhibit reductions in autistic symptoms over time […] Symptoms of ASD tend to diminish both in severity and number. The most improvement has usually been recorded for participants with IQ scores in the normal range and the least severe symptom presentations at their initial evaluation […] Published reports also indicate that there are subgroups of individuals with ASD who experience marked change in the course of their development at some point, either as deterioration or dramatic improvement. […] This phenomenon was […] noted in a Japanese sample of 201 young adults, although marked improvement was also described […]. Roughly one-third of this sample experienced a marked deterioration in behavior, most often occurring after age 10. The change occurred after age 20 in six cases. Declines were characterized by specific skill regressions or by increases in hyperactivity, aggression, destructiveness, obsessive behavior, or stereotyped behaviors. Notable improvements in the developmental course occurred in 43 % of the sample […] Improvements occurred between the ages of 10 and 15 years for most participants. No predictable antecedents to changes in the developmental course have been noted in previous studies. […] Significant improvements in verbal communication abilities have been reported on the ADI-R, although findings related to nonverbal communication have been mixed.”

“While not a core diagnostic domain at this time, difficulties processing sensory information are common in people with ASD and considered an associated feature […]. In a study of sensory processing difficulties in 18 adults with ASD and normal-range IQ scores aged 18–65, Crane, Goddard, and Pring (2009) found that, compared to matched controls, adults with ASD reported more experiences with low registration (i.e., responding slowly or not noticing sensory stimuli), sensitivity to sensory input, and avoiding sensations. All but one person reported extreme scores in at least one area.”

“Outcome classifications usually include five nodes ranging from Very Poor (i.e., the person cannot function independently in any way) to Very Good (i.e., achieving great independence; having friends and a job). There is considerable variation in outcomes for samples studied, but in general outcome for approximately 60 % of individuals with ASD is considered Fair, Poor, or Very Poor […] Most outcome studies indicate that few adults with ASD develop significant relationships outside of their families of origin. […] It is [however] likely that the majority of adults with ASD who also have normal range intellectual abilities have not been diagnosed, and many of these individuals may have married or developed other close relationships outside of their families of origin. […] In terms of outcome studies to date, very few adults with ASD have been reported to have successful, long-term romantic relationships […]. Some outcome studies indicate that no participants or only one participant has been involved in a romantic relationship […]. One-third to half of adults in outcome studies have friendships outside of their families […]. Almost 75 % of family members reporting on the sample described by Eaves and Ho (2008) indicated that they enjoyed good to excellent relationships with their relative with ASD. Similar results have been found in other studies of adults with A[S]D […]. Females have reportedly experienced greater success with peer relationships than males […]. Between 10 and 30 % of adults in recent studies (Eaves & Ho, 2008; Engstrom et al., 2003; Farley et al., 2009) had experience in a romantic relationship. […] Roughly one-third of adults with normal-range IQ scores in outcome studies are employed, inclusive of regular, full-time work, part-time or volunteer work, supported employment, and sheltered employment. […] Roughly 40 % of participants in outcome studies have been prescribed medications for psychiatric conditions or to control behavior”

“Ganz (2007) estimated the societal costs of ASD across the lifespan, calculating a total per capita societal cost for an individual with ASD at over $3 million. The most expensive period was early childhood, at which time many children are undergoing diagnostic studies, receiving medical treatments, and participating in intensive intervention programs. Costs for young adults (ages 23 through 27) were estimated to be $404,260 (in 2003 dollars). Ganz calculated direct medical, direct nonmedical, and indirect costs. Direct medical costs are expenses incurred in the course of medical care, and these tended to be lower for adults than other age groups. Direct nonmedical costs include adult support services and employment services, as examples. This life period often involves much trial-and-error while families attempt to identify services that will result in a good fit with their son or daughter. Direct nonmedical costs were higher in this age group than any other, estimated at $27,539 over this 5-year period. Indirect costs mainly comprise lost productivity costs, as when family members must leave work or reduce hours in order to support their family member. These costs are also the highest for this age group because adult children may remain dependent on their parents, but are also technically old enough to enter the workforce. Therefore, lost productivity costs are calculated for parents as well as the young adult, a phenomenon unique to this life period. Societal costs diminish as adults with ASD age, so that someone aged 48 through 52 incurs less than half the cost of someone aged 23 through 27.”
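
(The quoted figures invite a bit of back-of-the-envelope arithmetic. The Python sketch below uses only the numbers quoted above – the excerpt doesn’t give the full category breakdown for the young-adult period, so the unquoted categories are left as explicit placeholders rather than invented numbers.)

```python
# Tabulating the cost categories Ganz (2007) distinguishes, using only the
# figures quoted above (2003 dollars). The per-category split of the
# young-adult total is not fully given in the text, so the unquoted
# categories are None placeholders rather than invented numbers.
costs_ages_23_27 = {                 # 5-year totals for ages 23 through 27
    "direct_nonmedical": 27_539,     # quoted: highest of any age group
    "direct_medical": None,          # not quoted for this age group
    "indirect": None,                # mainly lost productivity; not quoted
}
total_ages_23_27 = 404_260           # quoted 5-year total for this period
lifetime_total = 3_000_000           # quoted: "over $3 million" per capita

# One thing the quoted numbers do let us compute: the direct nonmedical
# share of the young-adult total is fairly small.
share = costs_ages_23_27["direct_nonmedical"] / total_ages_23_27
print(f"direct nonmedical share, ages 23-27: {share:.1%}")   # ~6.8%
```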

“Virtually nothing is known about ASDs in late life. Questions at the forefront about this period are related to brain changes, the transfer of care from parents to other family members or human service agencies as parents become unable to care for their adult children through illness or death, and the adequacy of existing services to care for this population. There are many questions about the nature of changes in the brain in adults [with] ASD as they approach old age. We do not know whether or not they experience memory problems at a similar rate to adults in [the] general population, nor whether they may experience earlier onset of dementia and increased rates of dementia”

Chapter 15 was actually sort of okay and deals with a community survey conducted in Britain where some researchers tried to figure out how many autistics are out there, undiagnosed, in the community, and how well those individuals are actually doing compared to other people. Instead of going into the details of that chapter I’ll just point you to the original paper they describe in the text. The paper behind chapter 16, which I’ll talk a little bit about below, is available here.

“While evidence is accumulating regarding the benefits of psychosocial interventions for adults with ASD, there have been no systematic reviews or meta-analyses conducted to summarize the cumulative evidence base for these approaches. Therefore, we conducted a systematic review to examine the evidence base of psychosocial interventions for adults with ASD […] An extensive literature search was conducted in order to locate published studies documenting interventions for adults with ASD […] These searches revealed 1,217 published reports. Additionally, references of relevant studies were examined for additional studies to be included in this research. […] From these abstract searches, studies were then examined and included in this review if they (1) were conducted using a single case study, noncontrolled trial, non-randomized controlled trial, or RCT design that reported pretest and post-test data, (2) reported quantitative findings, (3) included participants ages 18 and older, and (4) included participants with ASD. In total, 13 studies assessing psychosocial interventions for adults with ASD were found. […] The included studies were diverse in their methodologies and represented numerous categories of interventions. A total of five were single case studies, four were RCTs, three were non-randomized controlled trials, and one was an uncontrolled pre–post trial. Six studies evaluated the efficacy of social cognition training, five studies evaluated the efficacy of applied behavior analysis (ABA) techniques, and two studies evaluated the efficacy of other types of community-based interventions. […] All of the included ABA studies were single case studies. […] All ABA studies reported positive benefits of treatment, although the maintenance of this benefit varied between studies. Effect size was not reported for the ABA studies, as findings were based on a single subject. […] As a whole, the studies identified had modest sample sizes, with the greatest including 71 participants and over three-quarters of studies having less than 20 participants.”
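
(For readers who find code clearer than prose, here’s a minimal sketch of the screening step described above – filtering candidate reports against the four stated inclusion criteria. The record fields and the example records are hypothetical; only the criteria themselves come from the text, and my reading of criterion (3) as ‘all participants aged 18 or older’ is an assumption.)

```python
from dataclasses import dataclass

# Designs the review accepted (criterion 1).
ELIGIBLE_DESIGNS = {
    "single case study",
    "noncontrolled trial",
    "non-randomized controlled trial",
    "RCT",
}

@dataclass
class Report:
    design: str                   # study design (criterion 1)
    has_pre_post_data: bool       # reported pretest and post-test data (criterion 1)
    quantitative: bool            # reported quantitative findings (criterion 2)
    min_participant_age: int      # youngest participant; 18+ assumed (criterion 3)
    includes_asd: bool            # included participants with ASD (criterion 4)

def include(report: Report) -> bool:
    """Return True if a report meets all four inclusion criteria."""
    return (report.design in ELIGIBLE_DESIGNS
            and report.has_pre_post_data
            and report.quantitative
            and report.min_participant_age >= 18
            and report.includes_asd)

# Hypothetical example records, for illustration only.
candidates = [
    Report("RCT", True, True, 18, True),           # meets all criteria
    Report("RCT", True, True, 12, True),           # fails: participants under 18
    Report("case series", True, False, 30, True),  # fails: design and findings
]
included = [r for r in candidates if include(r)]
print(f"{len(included)} of {len(candidates)} reports meet all criteria")
```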

“there are significant limitations to the current evidence base. While we conducted an extensive search of the literature available on psychosocial interventions for adults with ASD since 1950, only 13 studies were found. Due to the small number of studies, we were unable to conduct a meta-analysis of the adult ASD literature. As a consequence, clear estimates of effect size for different types of psychosocial interventions are not available. Effect sizes should also be interpreted with caution, especially for studies with small sample sizes, which comprised the majority of studies. The incongruent nature of outcome measures used in some of the included studies also indicate[s] that the reader should take caution before generalizing the results of included studies. For instance, García-Villamisar & Hughes (2007) used cognitive functioning outcomes, such as the Stockings of Cambridge and Big Circle/Little Circle tasks, to measure the effectiveness of a supported employment program but did not report outcome data on the number of adults with ASD who were employed as a result of the program.”

October 4, 2014 Posted by | autism, Books, Psychology

The Endocrine System at a Glance (II)

Some general comments about the book can be found in my first post about it. I don’t have a lot of other things to add, but I do want to cover some more stuff from the book. Below are some observations from the chapters about human reproduction, as well as some stuff on energy homeostasis.

“The ovum and sperm pronuclei fuse to form the zygote, which […] has the normal diploid chromosomal number […]. The zygote divides mitotically as it travels along the uterine tube, and at about 3 days after fertilization enters the uterus, when it is now a morula. The cells of the morula continue to divide to form a hollow sphere, the early blastocyst, consisting of a single layer of trophoblast cells and the embryoblast, an inner core of cells which will form the embryo. The trophoblast, after implantation, will form the vascular interface with the maternal circulation. After around 2 days in the uterus, the blastocyst is accepted by the endometrial epithelium under the influence of estrogens, progesterone and other endometrial factors. This embedding or implantation process triggers the ‘decidual response’, involving an expansion of a space, the decidua, to accommodate the embryo as it grows. The invasive trophoblast proliferates into a protoplasmic cell mass called a syncytiotrophoblast, which will eventually form the uteroplacental circulation. By about 10 days, the embryo is completely embedded in the endometrium.
If the ovum is fertilized and becomes implanted, the corpus luteum does not regress, but continues to secrete progesterone, and within 10–12 days after ovulation the syncytiotrophoblast begins to secrete human chorionic gonadotrophin (hCG) into the intervillous space. Most pregnancy tests are based on the detection of hCG, which takes over the role of luteinizing hormone (LH) and stimulates the production of progesterone, 17-hydroxyprogesterone and estradiol by the corpus luteum. Plasma levels of hCG reach a peak between the ninth and fourteenth week of pregnancy, when luteal function begins to fade, and by 20 weeks, both luteal function and plasma hCG have declined.
The syncytiotrophoblast secretes another hormone, human placental lactogen (hPL) […]. Its function may be to inhibit maternal growth hormone production, and it has several metabolic effects, notably glucose-sparing and lipolytic, possibly through its anti-insulin effects. […] The corpus luteum synthesizes relaxin, which relaxes the uterine muscle […] Progesterone concentrations rise progressively during pregnancy, and a major function of the hormone is thought to be its action, together with relaxin, to inhibit uterine motility, partly by decreasing its sensitivity to oxytocin […] A[n] important role of estrogens is to stimulate the steady rise in maternal plasma prolactin. Prolactin […] is the postpartum lactogenic hormone […] The placenta, which takes over the production of the hormones of pregnancy from the corpus luteum, is part of what is termed the fetoplacental unit. The placenta attains its mature structure by the end of the first trimester of pregnancy. […] The placenta is not only an endocrine organ, but also provides nutrients for the developing fetus and removes its waste products. […] The placenta lacks 17-hydroxylase and therefore cannot produce androgens. This is done by the fetal adrenal glands, and the androgens thus formed are the precursors of the estrogens. The placenta converts maternal and fetal dehydroepiandrosterone sulphate (DHEA-S) to testosterone and androstenedione, which are aromatized to estrone and estradiol. Another enzyme lacking in the placenta is 16-hydroxylase, so the placenta cannot directly form estriol and needs DHEA-S as substrate.”

“Normal fertility in the male is produced by a complex interaction between genetic, autocrine, paracrine and endocrine function. The endocrine control of reproductive function in the male depends upon an intact hypothalamo–pituitary–testicular axis. The testis has a dual role – the production of spermatozoa and the synthesis and secretion of testosterone needed for the development and maintenance of secondary sexual characteristics and essential for maintaining spermatogenesis. These functions in turn depend upon the pituitary gonadotrophin hormones: luteinizing hormone (LH; required to stimulate testicular Leydig cells to produce testosterone); and follicle stimulating hormone (FSH; required for the development of the immature testis and a possible role in adult spermatogenesis). Gonadotrophin production occurs in response to stimulation by hypothalamic GnRH. Testosterone exerts a negative feedback on the secretion of LH and FSH and the hormone inhibin-β, also synthesized by the testis, has a specific regulatory role for FSH.”

(A thought which occurred to me while reading these sections of the book: ‘It is fortunate that living organisms do not need to understand in detail how their reproductive systems work in order to have offspring…’)

“The term ‘functional disorders’ is used to describe a group of conditions [disorders of reproductive function in females] in which there are no structural or endocrine synthetic abnormalities in the pituitary–ovarian axis. Hypothalamic amenorrhoea is usually associated with weight-reducing diets, often with excess exercise […] It is the commonest cause of secondary amenorrhoea seen in endocrine clinics. Although a reduction in weight to 10% below ideal body weight is usually associated with amenorrhoea, there is wide variation between women. Changes in body composition, particularly reduced fat mass, are crucial to the characteristic hypothalamic changes of impaired GnRH secretion, loss of gonadotrophin pulsatility and subsequent hypogonadotrophic hypogonadism […]. The treatment of weight- and exercise-related amenorrhoea is specifically weight gain and reduction in exercise. […] Untreated, hypothalamic amenorrhoea is associated with reduced bone mineral density and ultimately osteoporosis. Women with long-term hypoestrogenaemia should have their bone density recorded and, if there is significant osteopenia or osteoporosis, combined estrogen/progesterone replacement therapy should be considered.”

“In birds and mammals, testosterone sexually differentiates the fetal brain. The fetal brain contains androgen and estrogen receptors, which mediate these actions of testosterone. […] there is evidence that testosterone causes changes in the fetal brain during sexual differentiation of the brain at about 6 weeks.”

“The precise nature of the influence of testosterone on behaviour is unknown, due in part to the limitations of methods of study. In humans, there is no apparent relationship between plasma levels of testosterone and sexual or aggressive behaviour. It seems that behaviour has a powerful influence on testosterone production, since stress drives it down, as do depression and threatening behaviour from others. In captive primate colonies, subordinate males have raised prolactin and very much reduced plasma levels of testosterone.”

“About 120 million sperm are produced each day by the young adult human testis. Most are stored in the vas deferens and the ampulla of the vas deferens, where they can remain and retain their fertility for at least 1 month. While stored, they are inactive due to several inhibitory factors, and are activated once in the uterus. In the female reproductive tract, sperm remain alive for 1 or 2 days at most.”

“Blood pressure is raised (i) when the heart beats more powerfully (positive inotropic effect); (ii) when arterioles constrict, increasing the peripheral resistance; (iii) when fluid and salts are retained; and (iv) through the influence of cardiovascular control centres in the brain, or a combination of two or more of these factors.”

“In recent years, adipose tissue has become recognized as a highly metabolically active organ. [For one example of a publication going into much more detail about these things, specifically in the context of cancer, see incidentally Kolonin et al.] […] The neuroendocrine system plays a critical role in energy metabolism and homeostasis and is implicated in the control of feeding behaviour […and for much more about this, see Redline and Berger’s book.]. Fats are the main energy stores in the body. Fats provide the most efficient means of storing energy in terms of kJ/g, and the body can store seemingly unlimited amounts of fat […]. Carbohydrate constitutes <1% of energy stores, and tissues such as the brain are absolutely dependent on a constant supply of glucose, which must be supplied in the diet or by gluconeogenesis. Proteins contain about 20% of the body’s energy stores, but since proteins have a structural and functional role, their integrity is defended, except in fasting, and these stores are therefore not readily available. Circulating glucose can be considered as a glucose pool […], which is in a dynamic state of equilibrium, balancing the inflow and outflow of glucose. The sources of inflow are the diet (carbohydrates) and hepatic glycogenolysis. The outflows are to the tissues, for glycogen synthesis, for energy use, or, if plasma concentrations reach a sufficient level, into the urine. This level is not usually reached in normal, healthy people. […] Regulation of the glucose flows is through the action of endocrine hormones, these being epinephrine, growth hormone, insulin, glucagon, glucocorticoids and thyroxine. Insulin is the only hormone with a hypoglycaemic action, whereas all the others are hyperglycaemic, since they stimulate glycogenolysis. […] Integration of fat, carbohydrate and protein metabolism is essential for the effective control of the glucose pool. Two other pools are drawn upon for this, these being the free fatty acid (FFA) pool and the amino acid (AA) pool […] The FFA pool comprises the balance between dietary FFA absorbed from the GIT [gastrointestinal tract], FFA released from adipose tissue after lipolysis, and FFA entering the metabolic process. Insulin drives FFA into storage as lipids, while glucagon, growth hormone and epinephrine stimulate lipolysis [breakdown of fats]. The AA pool in the bloodstream comprises the balance between protein synthesis and the entry of amino acids into the gluconeogenic pathways.”
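
(The ‘glucose pool’ picture described above – a pool in dynamic equilibrium, with inflows from diet and hepatic glycogenolysis and outflows to the tissues or, above a threshold, into the urine – is simple enough to capture in a toy simulation. The sketch below is my own, not the book’s; all rates and the renal threshold are made-up illustrative numbers, not physiological values.)

```python
# Toy sketch of the 'glucose pool' in dynamic equilibrium: inflows from diet
# and hepatic glycogenolysis, outflows to the tissues (glycogen synthesis,
# energy use), and -- only above a threshold -- urinary loss. All numbers
# are made-up illustrative values, not physiological ones.
def step(pool, diet_in, glycogenolysis_in, tissue_out,
         renal_threshold=10.0, renal_rate=0.5):
    """Advance the pool by one time step (arbitrary units)."""
    pool += diet_in + glycogenolysis_in   # inflows
    pool -= tissue_out                    # outflow to tissues/energy/glycogen
    if pool > renal_threshold:            # spillover into urine happens only
        pool -= renal_rate * (pool - renal_threshold)  # above the threshold
    return pool

pool = 5.0
for t in range(5):
    # equal inflow and outflow -> the pool stays in equilibrium, below the
    # renal threshold, so nothing is lost in the urine (the 'normal' case)
    pool = step(pool, diet_in=1.0, glycogenolysis_in=0.5, tissue_out=1.5)
    print(f"t={t}: pool={pool:.2f}")
```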

“In humans, food intake is determined by a number of factors, including the peripheral balance between usage and storage of energy, and by the brain, which through its appetite and satiety centres can trigger and terminate feeding behaviour […]. Leptin is secreted by human adipocytes but it may be more important (in the human) in the long-term maintenance of adequate energy stores during periods of energy deficit, rather than as a short-term satiety hormone. Feeding behaviour in humans can be initiated and sustained not only through hunger, but also through an awareness of the availability of especially palatable foods and by emotional states; the central mechanisms underlying this behaviour are poorly understood.”

October 2, 2014 Posted by | Books, Medicine