Econstudentlog

Interactive Coding with “Optimal” Round and Communication Blowup

The youtube description of this one was rather longer than usual, and I decided to quote it in full below:

“The problem of constructing error-resilient interactive protocols was introduced in the seminal works of Schulman (FOCS 1992, STOC 1993). These works show how to convert any two-party interactive protocol into one that is resilient to constant-fraction of error, while blowing up the communication by only a constant factor. Since these seminal works, there have been many follow-up works which improve the error rate, the communication rate, and the computational efficiency. All these works assume that in the underlying protocol, in each round, each party sends a *single* bit. This assumption is without loss of generality, since one can efficiently convert any protocol into one which sends one bit per round. However, this conversion may cause a substantial increase in *round* complexity, which is what we wish to minimize in this work. Moreover, all previous works assume that the communication complexity of the underlying protocol is *fixed* and a priori known, an assumption that we wish to remove. In this work, we consider protocols whose messages may be of *arbitrary* lengths, and where the length of each message and the length of the protocol may be *adaptive*, and may depend on the private inputs of the parties and on previous communication. We show how to efficiently convert any such protocol into another protocol with comparable efficiency guarantees, that is resilient to constant fraction of adversarial error, while blowing up both the *communication* complexity and the *round* complexity by at most a constant factor. Moreover, as opposed to most previous work, our error model not only allows the adversary to toggle with the corrupted bits, but also allows the adversary to *insert* and *delete* bits. In addition, our transformation preserves the computational efficiency of the protocol. Finally, we try to minimize the blowup parameters, and give evidence that our parameters are nearly optimal. This is joint work with Klim Efremenko and Elad Haramaty.”

A few links to stuff covered/mentioned in the lecture:

Coding for interactive communication correcting insertions and deletions.
Efficiently decodable insertion/deletion codes for high-noise and high-rate regimes.
Common reference string model.
Small-bias probability spaces: Efficient constructions and applications.
Interactive Channel Capacity Revisited.
Collision (computer science).
Chernoff bound.

Advertisements

September 6, 2017 Posted by | Computer science, Cryptography, Lectures, Mathematics | Leave a comment

Quantifying tumor evolution through spatial computational modeling

Two general remarks: 1. She talks very fast, in my opinion unpleasantly fast – the lecture would have been at least slightly easier to follow if she’d slowed down a little. 2. A few of the lectures uploaded in this lecture series (from the IAS Mathematical Methods in Cancer Evolution and Heterogeneity Workshop) seem to have some sound issues; in this lecture there are multiple 1-2 seconds long ‘chunks’ where the sound drops out and some words are lost. This is really annoying, and a similar problem (which was likely ‘the same problem’) previously lead me to quit another lecture in the series; however in this case I decided to give it a shot anyway, and I actually think it’s not a big deal; the sound-losses are very short in duration, and usually no more than one or two words are lost so you can usually figure out what was said. During this lecture there was incidentally also some issues with the monitor roughly 27 minutes in, but this isn’t a big deal as no information was lost and unlike the people who originally attended the lecture you can just skip ahead approximately one minute (that was how long it took to solve that problem).

A few relevant links to stuff she talks about in the lecture:

A Big Bang model of human colorectal tumor growth.
Approximate Bayesian computation.
Site frequency spectrum.
Identification of neutral tumor evolution across cancer types.
Using tumour phylogenetics to identify the roots of metastasis in humans.

August 22, 2017 Posted by | Cancer/oncology, Evolutionary biology, Genetics, Lectures, Mathematics, Medicine, Statistics | Leave a comment

How Species Interact

There are multiple reasons why I have not covered Arditi and Ginzburg’s book before, but none of them are related to the quality of the book’s coverage. It’s a really nice book. However the coverage is somewhat technical and model-focused, which makes it harder to blog than other kinds of books. Also, the version of the book I read was a hardcover ‘paper book’ version, and ‘paper books’ take a lot more work for me to cover than do e-books.

I should probably get it out of the way here at the start of the post that if you’re interested in ecology, predator-prey dynamics, etc., this book is a book you would be well advised to read; or, if you don’t read the book, you should at least familiarize yourself with the ideas therein e.g. through having a look at some of Arditi & Ginzburg’s articles on these topics. I should however note that I don’t actually think skipping the book and having a look at some articles instead will necessarily be a labour-saving strategy; the book is not particularly long and it’s to the point, so although it’s not a particularly easy read their case for ratio dependence is actually somewhat easy to follow – if you take the effort – in the sense that I believe how different related ideas and observations are linked is quite likely better expounded upon in the book than they might have been in their articles. The presumably wrote the book precisely in order to provide a concise yet coherent overview.

I have had some trouble figuring out how to cover this book, and I’m still not quite sure what might be/have been the best approach; when covering technical books I’ll often skip a lot of detail and math and try to stick to what might be termed ‘the main ideas’ when quoting from such books, but there’s a clear limit as to how many of the technical details included in a book like this it is possible to skip if you still want to actually talk about the stuff covered in the work, and this sometimes make blogging such books awkward. These authors spend a lot of effort talking about how different ecological models work and which sort of conclusions these different models may lead to in different contexts, and this kind of stuff is a very big part of the book. I’m not sure if you strictly need to have read an ecology textbook or two before you read this one in order to be able to follow the coverage, but I know that I personally derived some benefit from having read Gurney & Nisbet’s ecology text in the past and I did look up stuff in that book a few times along the way, e.g. when reminding myself what a Holling type 2 functional response is and how models with such a functional response pattern behave. ‘In theory’ I assume one might argue that you could theoretically look up all the relevant concepts along the way without any background knowledge of ecology – assuming you have a decent understanding of basic calculus/differential equations, linear algebra, equilibrium dynamics, etc. (…systems analysis? It’s hard for me to know and outline exactly which sources I’ve read in the past which helped make this book easier to read than it otherwise would have been, but suffice it to say that if you look at the page count and think that this will be an quick/easy read, it will be that only if you’ve read more than a few books on ‘related topics’, broadly defined, in the past), but I wouldn’t advise reading the book if all you know is high school math – the book will be incomprehensible to you, and you won’t make it. I ended up concluding that it would simply be too much work to try to make this post ‘easy’ to read for people who are unfamiliar with these topics and have not read the book, so although I’ve hardly gone out of my way to make the coverage hard to follow, the blog coverage that is to follow is mainly for my own benefit.

First a few relevant links, then some quotes and comments.

Lotka–Volterra equations.
Ecosystem model.
Arditi–Ginzburg equations. (Yep, these equations are named after the authors of this book).
Nicholson–Bailey model.
Functional response.
Monod equation.
Rosenzweig-MacArthur predator-prey model.
Trophic cascade.
Underestimation of mutual interference of predators.
Coupling in predator-prey dynamics: Ratio Dependence.
Michaelis–Menten kinetics.
Trophic level.
Advection–diffusion equation.
Paradox of enrichment. [Two quotes from the book: “actual systems do not behave as Rosensweig’s model predict” + “When ecologists have looked for evidence of the paradox of enrichment in natural and laboratory systems, they often find none and typically present arguments about why it was not observed”]
Predator interference emerging from trophotaxis in predator–prey systems: An individual-based approach.
Directed movement of predators and the emergence of density dependence in predator-prey models.

“Ratio-dependent predation is now covered in major textbooks as an alternative to the standard prey-dependent view […]. One of this book’s messages is that the two simple extreme theories, prey dependence and ratio dependence, are not the only alternatives: they are the ends of a spectrum. There are ecological domains in which one view works better than the other, with an intermediate view also being a possible case. […] Our years of work spent on the subject have led us to the conclusion that, although prey dependence might conceivably be obtained in laboratory settings, the common case occurring in nature lies close to the ratio-dependent end. We believe that the latter, instead of the prey-dependent end, can be viewed as the “null model of predation.” […] we propose the gradual interference model, a specific form of predator-dependent functional response that is approximately prey dependent (as in the standard theory) at low consumer abundances and approximately ratio dependent at high abundances. […] When density is low, consumers do not interfere and prey dependence works (as in the standard theory). When consumers density is sufficiently high, interference causes ratio dependence to emerge. In the intermediate densities, predator-dependent models describe partial interference.”

“Studies of food chains are on the edge of two domains of ecology: population and community ecology. The properties of food chains are determined by the nature of their basic link, the interaction of two species, a consumer and its resource, a predator and its prey.1 The study of this basic link of the chain is part of population ecology while the more complex food webs belong to community ecology. This is one of the main reasons why understanding the dynamics of predation is important for many ecologists working at different scales.”

“We have named predator-dependent the functional responses of the form g = g(N,P), where the predator density P acts (in addition to N [prey abundance, US]) as an independent variable to determine the per capita kill rate […] predator-dependent functional response models have one more parameter than the prey-dependent or the ratio-dependent models. […] The main interest that we see in these intermediate models is that the additional parameter can provide a way to quantify the position of a specific predator-prey pair of species along a spectrum with prey dependence at one end and ratio dependence at the other end:

g(N) <- g(N,P) -> g(N/P) (1.21)

In the Hassell-Varley and Arditi-Akçakaya models […] the mutual interference parameter m plays the role of a cursor along this spectrum, from m = 0 for prey dependence to m = 1 for ratio dependence. Note that this theory does not exclude that strong interference goes “beyond ratio dependence,” with m > 1.2 This is also called overcompensation. […] In this book, rather than being interested in the interference parameters per se, we use predator-dependent models to determine, either parametrically or nonparametrically, which of the ends of the spectrum (1.21) better describes predator-prey systems in general.”

“[T]he fundamental problem of the Lotka-Volterra and the Rosensweig-MacArthur dynamic models lies in the functional response and in the fact that this mathematical function is assumed not to depend on consumer density. Since this function measures the number of prey captured per consumer per unit time, it is a quantity that should be accessible to observation. This variable could be apprehended either on the fast behavioral time scale or on the slow demographic time scale. These two approaches need not necessarily reveal the same properties: […] a given species could display a prey-dependent response on the fast scale and a predator-dependent response on the slow scale. The reason is that, on a very short scale, each predator individually may “feel” virtually alone in the environment and react only to the prey that it encounters. On the long scale, the predators are more likely to be affected by the presence of conspecifics, even without direct encounters. In the demographic context of this book, it is the long time scale that is relevant. […] if predator dependence is detected on the fast scale, then it can be inferred that it must be present on the slow scale; if predator dependence is not detected on the fast scale, it cannot be inferred that it is absent on the slow scale.”

Some related thoughts. A different way to think about this – which they don’t mention in the book, but which sprang to mind to me as I was reading it – is to think about this stuff in terms of a formal predator territorial overlap model and then asking yourself this question: Assume there’s zero territorial overlap – does this fact mean that the existence of conspecifics does not matter? The answer is of course no. The sizes of the individual patches/territories may be greatly influenced by the predator density even in such a context. Also, the territorial area available to potential offspring (certainly a fitness-relevant parameter) may be greatly influenced by the number of competitors inhabiting the surrounding territories. In relation to the last part of the quote it’s easy to see that in a model with significant territorial overlap you don’t need direct behavioural interaction among predators for the overlap to be relevant; even if two bears never meet, if one of them eats a fawn the other one would have come across two days later, well, such indirect influences may be important for prey availability. Of course as prey tend to be mobile, even if predator territories are static and non-overlapping in a geographic sense, they might not be in a functional sense. Moving on…

“In [chapter 2 we] attempted to assess the presence and the intensity of interference in all functional response data sets that we could gather in the literature. Each set must be trivariate, with estimates of the prey consumed at different values of prey density and different values of predator densities. Such data sets are not very abundant because most functional response experiments present in the literature are simply bivariate, with variations of the prey density only, often with a single predator individual, ignoring the fact that predator density can have an influence. This results from the usual presentation of functional responses in textbooks, which […] focus only on the influence of prey density.
Among the data sets that we analyzed, we did not find a single one in which the predator density did not have a significant effect. This is a powerful empirical argument against prey dependence. Most systems lie somewhere on the continuum between prey dependence (m=0) and ratio dependence (m=1). However, they do not appear to be equally distributed. The empirical evidence provided in this chapter suggests that they tend to accumulate closer to the ratio-dependent end than to the prey-dependent end.”

“Equilibrium properties result from the balanced predator-prey equations and contain elements of the underlying dynamic model. For this reason, the response of equilibria to a change in model parameters can inform us about the structure of the underlying equations. To check the appropriateness of the ratio-dependent versus prey-dependent views, we consider the theoretical equilibrium consequences of the two contrasting assumptions and compare them with the evidence from nature. […] According to the standard prey-dependent theory, in reference to [an] increase in primary production, the responses of the populations strongly depend on their level and on the total number of trophic levels. The last, top level always responds proportionally to F [primary input]. The next to the last level always remains constant: it is insensitive to enrichment at the bottom because it is perfectly controled [sic] by the last level. The first, primary producer level increases if the chain length has an odd number of levels, but declines (or stays constant with a Lotka-Volterra model) in the case of an even number of levels. According to the ratio-dependent theory, all levels increase proportionally, independently of how many levels are present. The present purpose of this chapter is to show that the second alternative is confirmed by natural data and that the strange predictions of the prey-dependent theory are unsupported.”

“If top predators are eliminated or reduced in abundance, models predict that the sequential lower trophic levels must respond by changes of alternating signs. For example, in a three-level system of plants-herbivores-predators, the reduction of predators leads to the increase of herbivores and the consequential reduction in plant abundance. This response is commonly called the trophic cascade. In a four-level system, the bottom level will increase in response to harvesting at the top. These predicted responses are quite intuitive and are, in fact, true for both short-term and long-term responses, irrespective of the theory one employs. […] A number of excellent reviews have summarized and meta-analyzed large amounts of data on trophic cascades in food chains […] In general, the cascading reaction is strongest in lakes, followed by marine systems, and weakest in terrestrial systems. […] Any theory that claims to describe the trophic chain equilibria has to produce such cascading when top predators are reduced or eliminated. It is well known that the standard prey-dependent theory supports this view of top-down cascading. It is not widely appreciated that top-down cascading is likewise a property of ratio-dependent trophic chains. […] It is [only] for equilibrial responses to enrichment at the bottom that predictions are strikingly different according to the two theories”.

As the book does spend a little time on this I should perhaps briefly interject here that the above paragraph should not be taken to indicate that the two types of models provide identical predictions in the top-down cascading context in all cases; both predict cascading, but there are even so some subtle differences between the models here as well. Some of these differences are however quite hard to test.

“[T]he traditional Lotka-Volterra interaction term […] is nothing other than the law of mass action of chemistry. It assumes that predator and prey individuals encounter each other randomly in the same way that molecules interact in a chemical solution. Other prey-dependent models, like Holling’s, derive from the same idea. […] an ecological system can only be described by such a model if conspecifics do not interfere with each other and if the system is sufficiently homogeneous […] we will demonstrate that spatial heterogeneity, be it in the form of a prey refuge or in the form of predator clusters, leads to emergence of gradual interference or of ratio dependence when the functional response is observed at the population level. […] We present two mechanistic individual-based models that illustrate how, with gradually increasing predator density and gradually increasing predator clustering, interference can become gradually stronger. Thus, a given biological system, prey dependent at low predator density, can gradually become ratio dependent at high predator density. […] ratio dependence is a simple way of summarizing the effects induced by spatial heterogeneity, while the prey dependent [models] (e.g., Lotka-Volterra) is more appropriate in homogeneous environments.”

“[W]e consider that a good model of interacting species must be fundamentally invariant to a proportional change of all abundances in the system. […] Allowing interacting populations to expand in balanced exponential growth makes the laws of ecology invariant with respect to multiplying interacting abundances by the same constant, so that only ratios matter. […] scaling invariance is required if we wish to preserve the possibility of joint exponential growth of an interacting pair. […] a ratio-dependent model allows for joint exponential growth. […] Neither the standard prey-dependent models nor the more general predator-dependent models allow for balanced growth. […] In our view, communities must be expected to expand exponentially in the presence of unlimited resources. Of course, limiting factors ultimately stop this expansion just as they do for a single species. With our view, it is the limiting resources that stop the joint expansion of the interacting populations; it is not directly due to the interactions themselves. This partitioning of the causes is a major simplification that traditional theory implies only in the case of a single species.”

August 1, 2017 Posted by | Biology, Books, Chemistry, Ecology, Mathematics, Studies | Leave a comment

Melanoma therapeutic strategies that select against resistance

A short lecture, but interesting:

If you’re not an oncologist, these two links in particular might be helpful to have a look at before you start out: BRAF (gene) & Myc. A very substantial proportion of the talk is devoted to math and stats methodology (which some people will find interesting and others …will not).

July 3, 2017 Posted by | Biology, Cancer/oncology, Genetics, Lectures, Mathematics, Medicine, Statistics | Leave a comment

Harnessing phenotypic heterogeneity to design better therapies

Unlike many of the IAS lectures I’ve recently blogged this one is a new lecture – it was uploaded earlier this week. I have to say that I was very surprised – and disappointed – that the treatment strategy discussed in the lecture had not already been analyzed in a lot of detail and been implemented in clinical practice for some time. Why would you not expect the composition of cancer cell subtypes in the tumour microenvironment to change when you start treatment – in any setting where a subgroup of cancer cells has a different level of responsiveness to treatment than ‘the average’, that would to me seem to be the expected outcome. And concepts such as drug holidays and dose adjustments as treatment responses to evolving drug resistance/treatment failure seem like such obvious approaches to try out here (…the immunologists dealing with HIV infection have been studying such things for decades). I guess ‘better late than never’.

A few papers mentioned/discussed in the lecture:

Impact of Metabolic Heterogeneity on Tumor Growth, Invasion, and Treatment Outcomes.
Adaptive vs continuous cancer therapy: Exploiting space and trade-offs in drug scheduling.
Exploiting evolutionary principles to prolong tumor control in preclinical models of breast cancer.

June 11, 2017 Posted by | Cancer/oncology, Genetics, Immunology, Lectures, Mathematics, Medicine, Studies | Leave a comment

The Mathematical Challenge of Large Networks

This is another one of the aforementioned lectures I watched a while ago, but had never got around to blogging:

If I had to watch this one again, I’d probably skip most of the second half; it contains highly technical coverage of topics in graph theory, and it was very difficult for me to follow (but I did watch it to the end, just out of curiosity).

The lecturer has put up a ~500 page publication on these and related topics, which is available here, so if you want to know more that’s an obvious place to go have a look. A few other relevant links to stuff mentioned/covered in the lecture:
Szemerédi regularity lemma.
Graphon.
Turán’s theorem.
Quantum graph.

May 19, 2017 Posted by | Computer science, Lectures, Mathematics, Statistics | Leave a comment

Quantifying tradeoffs between fairness and accuracy in online learning

From a brief skim of this paper, which is coauthored by the guy giving this lecture, it looked to me like it covers many of the topics discussed in the lecture. So if you’re unsure as to whether or not to watch the lecture (…or if you want to know more about this stuff after you’ve watched the lecture) you might want to have a look at that paper. Although the video is long for a single lecture I would note that the lecture itself lasts only approximately one hour; the last 10 minutes are devoted to Q&A.

May 12, 2017 Posted by | Computer science, Economics, Lectures, Mathematics | Leave a comment

Biodemography of aging (IV)

My working assumption as I was reading part two of the book was that I would not be covering that part of the book in much detail here because it would simply be too much work to make such posts legible to the readership of this blog. However I then later, while writing this post, had the thought that given that almost nobody reads along here anyway (I’m not complaining, mind you – this is how I like it these days), the main beneficiary of my blog posts will always be myself, which lead to the related observation/notion that I should not be limiting my coverage of interesting stuff here simply because some hypothetical and probably nonexistent readership out there might not be able to follow the coverage. So when I started out writing this post I was working under the assumption that it would be my last post about the book, but I now feel sure that if I find the time I’ll add at least one more post about the book’s statistics coverage. On a related note I am explicitly making the observation here that this post was written for my benefit, not yours. You can read it if you like, or not, but it was not really written for you.

I have added bold a few places to emphasize key concepts and observations from the quoted paragraphs and in order to make the post easier for me to navigate later (all the italics below are on the other hand those of the authors of the book).

Biodemography is a multidisciplinary branch of science that unites under its umbrella various analytic approaches aimed at integrating biological knowledge and methods and traditional demographic analyses to shed more light on variability in mortality and health across populations and between individuals. Biodemography of aging is a special subfield of biodemography that focuses on understanding the impact of processes related to aging on health and longevity.”

“Mortality rates as a function of age are a cornerstone of many demographic analyses. The longitudinal age trajectories of biomarkers add a new dimension to the traditional demographic analyses: the mortality rate becomes a function of not only age but also of these biomarkers (with additional dependence on a set of sociodemographic variables). Such analyses should incorporate dynamic characteristics of trajectories of biomarkers to evaluate their impact on mortality or other outcomes of interest. Traditional analyses using baseline values of biomarkers (e.g., Cox proportional hazards or logistic regression models) do not take into account these dynamics. One approach to the evaluation of the impact of biomarkers on mortality rates is to use the Cox proportional hazards model with time-dependent covariates; this approach is used extensively in various applications and is available in all popular statistical packages. In such a model, the biomarker is considered a time-dependent covariate of the hazard rate and the corresponding regression parameter is estimated along with standard errors to make statistical inference on the direction and the significance of the effect of the biomarker on the outcome of interest (e.g., mortality). However, the choice of the analytic approach should not be governed exclusively by its simplicity or convenience of application. It is essential to consider whether the method gives meaningful and interpretable results relevant to the research agenda. In the particular case of biodemographic analyses, the Cox proportional hazards model with time-dependent covariates is not the best choice.

“Longitudinal studies of aging present special methodological challenges due to inherent characteristics of the data that need to be addressed in order to avoid biased inference. The challenges are related to the fact that the populations under study (aging individuals) experience substantial dropout rates related to death or poor health and often have co-morbid conditions related to the disease of interest. The standard assumption made in longitudinal analyses (although usually not explicitly mentioned in publications) is that dropout (e.g., death) is not associated with the outcome of interest. While this can be safely assumed in many general longitudinal studies (where, e.g., the main causes of dropout might be the administrative end of the study or moving out of the study area, which are presumably not related to the studied outcomes), the very nature of the longitudinal outcomes (e.g., measurements of some physiological biomarkers) analyzed in a longitudinal study of aging assumes that they are (at least hypothetically) related to the process of aging. Because the process of aging leads to the development of diseases and, eventually, death, in longitudinal studies of aging an assumption of non-association of the reason for dropout and the outcome of interest is, at best, risky, and usually is wrong. As an illustration, we found that the average trajectories of different physiological indices of individuals dying at earlier ages markedly deviate from those of long-lived individuals, both in the entire Framingham original cohort […] and also among carriers of specific alleles […] In such a situation, panel compositional changes due to attrition affect the averaging procedure and modify the averages in the total sample. Furthermore, biomarkers are subject to measurement error and random biological variability. They are usually collected intermittently at examination times which may be sparse and typically biomarkers are not observed at event times. It is well known in the statistical literature that ignoring measurement errors and biological variation in such variables and using their observed “raw” values as time-dependent covariates in a Cox regression model may lead to biased estimates and incorrect inferences […] Standard methods of survival analysis such as the Cox proportional hazards model (Cox 1972) with time-dependent covariates should be avoided in analyses of biomarkers measured with errors because they can lead to biased estimates.

“Statistical methods aimed at analyses of time-to-event data jointly with longitudinal measurements have become known in the mainstream biostatistical literature as “joint models for longitudinal and time-to-event data” (“survival” or “failure time” are often used interchangeably with “time-to-event”) or simply “joint models.” This is an active and fruitful area of biostatistics with an explosive growth in recent years. […] The standard joint model consists of two parts, the first representing the dynamics of longitudinal data (which is referred to as the “longitudinal sub-model”) and the second one modeling survival or, generally, time-to-event data (which is referred to as the “survival sub-model”). […] Numerous extensions of this basic model have appeared in the joint modeling literature in recent decades, providing great flexibility in applications to a wide range of practical problems. […] The standard parameterization of the joint model (11.2) assumes that the risk of the event at age t depends on the current “true” value of the longitudinal biomarker at this age. While this is a reasonable assumption in general, it may be argued that additional dynamic characteristics of the longitudinal trajectory can also play a role in the risk of death or onset of a disease. For example, if two individuals at the same age have exactly the same level of some biomarker at this age, but the trajectory for the first individual increases faster with age than that of the second one, then the first individual can have worse survival chances for subsequent years. […] Therefore, extensions of the basic parameterization of joint models allowing for dependence of the risk of an event on such dynamic characteristics of the longitudinal trajectory can provide additional opportunities for comprehensive analyses of relationships between the risks and longitudinal trajectories. Several authors have considered such extended models. […] joint models are computationally intensive and are sometimes prone to convergence problems [however such] models provide more efficient estimates of the effect of a covariate […] on the time-to-event outcome in the case in which there is […] an effect of the covariate on the longitudinal trajectory of a biomarker. This means that analyses of longitudinal and time-to-event data in joint models may require smaller sample sizes to achieve comparable statistical power with analyses based on time-to-event data alone (Chen et al. 2011).”

“To be useful as a tool for biodemographers and gerontologists who seek biological explanations for observed processes, models of longitudinal data should be based on realistic assumptions and reflect relevant knowledge accumulated in the field. An example is the shape of the risk functions. Epidemiological studies show that the conditional hazards of health and survival events considered as functions of risk factors often have U- or J-shapes […], so a model of aging-related changes should incorporate this information. In addition, risk variables, and, what is very important, their effects on the risks of corresponding health and survival events, experience aging-related changes and these can differ among individuals. […] An important class of models for joint analyses of longitudinal and time-to-event data incorporating a stochastic process for description of longitudinal measurements uses an epidemiologically-justified assumption of a quadratic hazard (i.e., U-shaped in general and J-shaped for variables that can take values only on one side of the U-curve) considered as a function of physiological variables. Quadratic hazard models have been developed and intensively applied in studies of human longitudinal data”.

“Various approaches to statistical model building and data analysis that incorporate unobserved heterogeneity are ubiquitous in different scientific disciplines. Unobserved heterogeneity in models of health and survival outcomes can arise because there may be relevant risk factors affecting an outcome of interest that are either unknown or not measured in the data. Frailty models introduce the concept of unobserved heterogeneity in survival analysis for time-to-event data. […] Individual age trajectories of biomarkers can differ due to various observed as well as unobserved (and unknown) factors and such individual differences propagate to differences in risks of related time-to-event outcomes such as the onset of a disease or death. […] The joint analysis of longitudinal and time-to-event data is the realm of a special area of biostatistics named “joint models for longitudinal and time-to-event data” or simply “joint models” […] Approaches that incorporate heterogeneity in populations through random variables with continuous distributions (as in the standard joint models and their extensions […]) assume that the risks of events and longitudinal trajectories follow similar patterns for all individuals in a population (e.g., that biomarkers change linearly with age for all individuals). Although such homogeneity in patterns can be justifiable for some applications, generally this is a rather strict assumption […] A population under study may consist of subpopulations with distinct patterns of longitudinal trajectories of biomarkers that can also have different effects on the time-to-event outcome in each subpopulation. When such subpopulations can be defined on the base of observed covariate(s), one can perform stratified analyses applying different models for each subpopulation. However, observed covariates may not capture the entire heterogeneity in the population in which case it may be useful to conceive of the population as consisting of latent subpopulations defined by unobserved characteristics. Special methodological approaches are necessary to accommodate such hidden heterogeneity. Within the joint modeling framework, a special class of models, joint latent class models, was developed to account for such heterogeneity […] The joint latent class model has three components. First, it is assumed that a population consists of a fixed number of (latent) subpopulations. The latent class indicator represents the latent class membership and the probability of belonging to the latent class is specified by a multinomial logistic regression function of observed covariates. It is assumed that individuals from different latent classes have different patterns of longitudinal trajectories of biomarkers and different risks of event. The key assumption of the model is conditional independence of the biomarker and the time-to-events given the latent classes. Then the class-specific models for the longitudinal and time-to-event outcomes constitute the second and third component of the model thus completing its specification. […] the latent class stochastic process model […] provides a useful tool for dealing with unobserved heterogeneity in joint analyses of longitudinal and time-to-event outcomes and taking into account hidden components of aging in their joint influence on health and longevity. This approach is also helpful for sensitivity analyses in applications of the original stochastic process model. 
We recommend starting the analyses with the original stochastic process model and estimating the model ignoring possible hidden heterogeneity in the population. Then the latent class stochastic process model can be applied to test hypotheses about the presence of hidden heterogeneity in the data in order to appropriately adjust the conclusions if a latent structure is revealed.”

The longitudinal genetic-demographic model (or the genetic-demographic model for longitudinal data) […] combines three sources of information in the likelihood function: (1) follow-up data on survival (or, generally, on some time-to-event) for genotyped individuals; (2) (cross-sectional) information on ages at biospecimen collection for genotyped individuals; and (3) follow-up data on survival for non-genotyped individuals. […] Such joint analyses of genotyped and non-genotyped individuals can result in substantial improvements in statistical power and accuracy of estimates compared to analyses of the genotyped subsample alone if the proportion of non-genotyped participants is large. Situations in which genetic information cannot be collected for all participants of longitudinal studies are not uncommon. They can arise for several reasons: (1) the longitudinal study may have started some time before genotyping was added to the study design so that some initially participating individuals dropped out of the study (i.e., died or were lost to follow-up) by the time of genetic data collection; (2) budget constraints prohibit obtaining genetic information for the entire sample; (3) some participants refuse to provide samples for genetic analyses. Nevertheless, even when genotyped individuals constitute a majority of the sample or the entire sample, application of such an approach is still beneficial […] The genetic stochastic process model […] adds a new dimension to genetic biodemographic analyses, combining information on longitudinal measurements of biomarkers available for participants of a longitudinal study with follow-up data and genetic information. Such joint analyses of different sources of information collected in both genotyped and non-genotyped individuals allow for more efficient use of the research potential of longitudinal data which otherwise remains underused when only genotyped individuals or only subsets of available information (e.g., only follow-up data on genotyped individuals) are involved in analyses. Similar to the longitudinal genetic-demographic model […], the benefits of combining data on genotyped and non-genotyped individuals in the genetic SPM come from the presence of common parameters describing characteristics of the model for genotyped and non-genotyped subsamples of the data. This takes into account the knowledge that the non-genotyped subsample is a mixture of carriers and non-carriers of the same alleles or genotypes represented in the genotyped subsample and applies the ideas of heterogeneity analyses […] When the non-genotyped subsample is substantially larger than the genotyped subsample, these joint analyses can lead to a noticeable increase in the power of statistical estimates of genetic parameters compared to estimates based only on information from the genotyped subsample. This approach is applicable not only to genetic data but to any discrete time-independent variable that is observed only for a subsample of individuals in a longitudinal study.

“Despite an existing tradition of interpreting differences in the shapes or parameters of the mortality rates (survival functions) resulting from the effects of exposure to different conditions or other interventions in terms of characteristics of individual aging, this practice has to be used with care. This is because such characteristics are difficult to interpret in terms of properties of external and internal processes affecting the chances of death. An important question then is: What kind of mortality model has to be developed to obtain parameters that are biologically interpretable? The purpose of this chapter is to describe an approach to mortality modeling that represents mortality rates in terms of parameters of physiological changes and declining health status accompanying the process of aging in humans. […] A traditional (demographic) description of changes in individual health/survival status is performed using a continuous-time random Markov process with a finite number of states, and age-dependent transition intensity functions (transitions rates). Transitions to the absorbing state are associated with death, and the corresponding transition intensity is a mortality rate. Although such a description characterizes connections between health and mortality, it does not allow for studying factors and mechanisms involved in the aging-related health decline. Numerous epidemiological studies provide compelling evidence that health transition rates are influenced by a number of factors. Some of them are fixed at the time of birth […]. Others experience stochastic changes over the life course […] The presence of such randomly changing influential factors violates the Markov assumption, and makes the description of aging-related changes in health status more complicated. […] The age dynamics of influential factors (e.g., physiological variables) in connection with mortality risks has been described using a stochastic process model of human mortality and aging […]. Recent extensions of this model have been used in analyses of longitudinal data on aging, health, and longevity, collected in the Framingham Heart Study […] This model and its extensions are described in terms of a Markov stochastic process satisfying a diffusion-type stochastic differential equation. The stochastic process is stopped at random times associated with individuals’ deaths. […] When an individual’s health status is taken into account, the coefficients of the stochastic differential equations become dependent on values of the jumping process. This dependence violates the Markov assumption and renders the conditional Gaussian property invalid. So the description of this (continuously changing) component of aging-related changes in the body also becomes more complicated. Since studying age trajectories of physiological states in connection with changes in health status and mortality would provide more realistic scenarios for analyses of available longitudinal data, it would be a good idea to find an appropriate mathematical description of the joint evolution of these interdependent processes in aging organisms. For this purpose, we propose a comprehensive model of human aging, health, and mortality in which the Markov assumption is fulfilled by a two-component stochastic process consisting of jumping and continuously changing processes. 
The jumping component is used to describe relatively fast changes in health status occurring at random times, and the continuous component describes relatively slow stochastic age-related changes of individual physiological states. […] The use of stochastic differential equations for random continuously changing covariates has been studied intensively in the analysis of longitudinal data […] Such a description is convenient since it captures the feedback mechanism typical of biological systems reflecting regular aging-related changes and takes into account the presence of random noise affecting individual trajectories. It also captures the dynamic connections between aging-related changes in health and physiological states, which are important in many applications.”

April 23, 2017 Posted by | Biology, Books, Demographics, Genetics, Mathematics, Statistics | Leave a comment

Random stuff

It’s been a long time since I last posted one of these posts, so a great number of links of interest has accumulated in my bookmarks. I intended to include a large number of these in this post and this of course means that I surely won’t cover each specific link included in this post in anywhere near the amount of detail it deserves, but that can’t be helped.

i. Autism Spectrum Disorder Grown Up: A Chart Review of Adult Functioning.

“For those diagnosed with ASD in childhood, most will become adults with a significant degree of disability […] Seltzer et al […] concluded that, despite considerable heterogeneity in social outcomes, “few adults with autism live independently, marry, go to college, work in competitive jobs or develop a large network of friends”. However, the trend within individuals is for some functional improvement over time, as well as a decrease in autistic symptoms […]. Some authors suggest that a sub-group of 15–30% of adults with autism will show more positive outcomes […]. Howlin et al. (2004), and Cederlund et al. (2008) assigned global ratings of social functioning based on achieving independence, friendships/a steady relationship, and education and/or a job. These two papers described respectively 22% and 27% of groups of higher functioning (IQ above 70) ASD adults as attaining “Very Good” or “Good” outcomes.”

“[W]e evaluated the adult outcomes for 45 individuals diagnosed with ASD prior to age 18, and compared this with the functioning of 35 patients whose ASD was identified after 18 years. Concurrent mental illnesses were noted for both groups. […] Comparison of adult outcome within the group of subjects diagnosed with ASD prior to 18 years of age showed significantly poorer functioning for those with co-morbid Intellectual Disability, except in the domain of establishing intimate relationships [my emphasis. To make this point completely clear, one way to look at these results is that apparently in the domain of partner-search autistics diagnosed during childhood are doing so badly in general that being intellectually disabled on top of being autistic is apparently conferring no additional disadvantage]. Even in the normal IQ group, the mean total score, i.e. the sum of the 5 domains, was relatively low at 12.1 out of a possible 25. […] Those diagnosed as adults had achieved significantly more in the domains of education and independence […] Some authors have described a subgroup of 15–27% of adult ASD patients who attained more positive outcomes […]. Defining an arbitrary adaptive score of 20/25 as “Good” for our normal IQ patients, 8 of thirty four (25%) of those diagnosed as adults achieved this level. Only 5 of the thirty three (15%) diagnosed in childhood made the cutoff. (The cut off was consistent with a well, but not superlatively, functioning member of society […]). None of the Intellectually Disabled ASD subjects scored above 10. […] All three groups had a high rate of co-morbid psychiatric illnesses. Depression was particularly frequent in those diagnosed as adults, consistent with other reports […]. Anxiety disorders were also prevalent in the higher functioning participants, 25–27%. […] Most of the higher functioning ASD individuals, whether diagnosed before or after 18 years of age, were functioning well below the potential implied by their normal range intellect.”

Related papers: Social Outcomes in Mid- to Later Adulthood Among Individuals Diagnosed With Autism and Average Nonverbal IQ as Children, Adults With Autism Spectrum Disorders.

ii. Premature mortality in autism spectrum disorder. This is a Swedish matched case cohort study. Some observations from the paper:

“The aim of the current study was to analyse all-cause and cause-specific mortality in ASD using nationwide Swedish population-based registers. A further aim was to address the role of intellectual disability and gender as possible moderators of mortality and causes of death in ASD. […] Odds ratios (ORs) were calculated for a population-based cohort of ASD probands (n = 27 122, diagnosed between 1987 and 2009) compared with gender-, age- and county of residence-matched controls (n = 2 672 185). […] During the observed period, 24 358 (0.91%) individuals in the general population died, whereas the corresponding figure for individuals with ASD was 706 (2.60%; OR = 2.56; 95% CI 2.38–2.76). Cause-specific analyses showed elevated mortality in ASD for almost all analysed diagnostic categories. Mortality and patterns for cause-specific mortality were partly moderated by gender and general intellectual ability. […] Premature mortality was markedly increased in ASD owing to a multitude of medical conditions. […] Mortality was significantly elevated in both genders relative to the general population (males: OR = 2.87; females OR = 2.24)”.

“Individuals in the control group died at a mean age of 70.20 years (s.d. = 24.16, median = 80), whereas the corresponding figure for the entire ASD group was 53.87 years (s.d. = 24.78, median = 55), for low-functioning ASD 39.50 years (s.d. = 21.55, median = 40) and high-functioning ASD 58.39 years (s.d. = 24.01, median = 63) respectively. […] Significantly elevated mortality was noted among individuals with ASD in all analysed categories of specific causes of death except for infections […] ORs were highest in cases of mortality because of diseases of the nervous system (OR = 7.49) and because of suicide (OR = 7.55), in comparison with matched general population controls.”

iii. Adhesive capsulitis of shoulder. This one is related to a health scare I had a few months ago. A few quotes:

Adhesive capsulitis (also known as frozen shoulder) is a painful and disabling disorder of unclear cause in which the shoulder capsule, the connective tissue surrounding the glenohumeral joint of the shoulder, becomes inflamed and stiff, greatly restricting motion and causing chronic pain. Pain is usually constant, worse at night, and with cold weather. Certain movements or bumps can provoke episodes of tremendous pain and cramping. […] People who suffer from adhesive capsulitis usually experience severe pain and sleep deprivation for prolonged periods due to pain that gets worse when lying still and restricted movement/positions. The condition can lead to depression, problems in the neck and back, and severe weight loss due to long-term lack of deep sleep. People who suffer from adhesive capsulitis may have extreme difficulty concentrating, working, or performing daily life activities for extended periods of time.”

Some other related links below:

The prevalence of a diabetic condition and adhesive capsulitis of the shoulder.
“Adhesive capsulitis is characterized by a progressive and painful loss of shoulder motion of unknown etiology. Previous studies have found the prevalence of adhesive capsulitis to be slightly greater than 2% in the general population. However, the relationship between adhesive capsulitis and diabetes mellitus (DM) is well documented, with the incidence of adhesive capsulitis being two to four times higher in diabetics than in the general population. It affects about 20% of people with diabetes and has been described as the most disabling of the common musculoskeletal manifestations of diabetes.”

Adhesive Capsulitis (review article).
“Patients with type I diabetes have a 40% chance of developing a frozen shoulder in their lifetimes […] Dominant arm involvement has been shown to have a good prognosis; associated intrinsic pathology or insulin-dependent diabetes of more than 10 years are poor prognostic indicators.15 Three stages of adhesive capsulitis have been described, with each phase lasting for about 6 months. The first stage is the freezing stage in which there is an insidious onset of pain. At the end of this period, shoulder ROM [range of motion] becomes limited. The second stage is the frozen stage, in which there might be a reduction in pain; however, there is still restricted ROM. The third stage is the thawing stage, in which ROM improves, but can take between 12 and 42 months to do so. Most patients regain a full ROM; however, 10% to 15% of patients suffer from continued pain and limited ROM.”

Musculoskeletal Complications in Type 1 Diabetes.
“The development of periarticular thickening of skin on the hands and limited joint mobility (cheiroarthropathy) is associated with diabetes and can lead to significant disability. The objective of this study was to describe the prevalence of cheiroarthropathy in the well-characterized Diabetes Control and Complications Trial/Epidemiology of Diabetes Interventions and Complications (DCCT/EDIC) cohort and examine associated risk factors […] This cross-sectional analysis was performed in 1,217 participants (95% of the active cohort) in EDIC years 18/19 after an average of 24 years of follow-up. Cheiroarthropathy — defined as the presence of any one of the following: adhesive capsulitis, carpal tunnel syndrome, flexor tenosynovitis, Dupuytren’s contracture, or a positive prayer sign [related link] — was assessed using a targeted medical history and standardized physical examination. […] Cheiroarthropathy was present in 66% of subjects […] Cheiroarthropathy is common in people with type 1 diabetes of long duration (∼30 years) and is related to longer duration and higher levels of glycemia. Clinicians should include cheiroarthropathy in their routine history and physical examination of patients with type 1 diabetes because it causes clinically significant functional disability.”

Musculoskeletal disorders in diabetes mellitus: an update.
“Diabetes mellitus (DM) is associated with several musculoskeletal disorders. […] The exact pathophysiology of most of these musculoskeletal disorders remains obscure. Connective tissue disorders, neuropathy, vasculopathy or combinations of these problems, may underlie the increased incidence of musculoskeletal disorders in DM. The development of musculoskeletal disorders is dependent on age and on the duration of DM; however, it has been difficult to show a direct correlation with the metabolic control of DM.”

Rheumatic Manifestations of Diabetes Mellitus.

Prevalence of symptoms and signs of shoulder problems in people with diabetes mellitus.

Musculoskeletal Disorders of the Hand and Shoulder in Patients with Diabetes.
“In addition to micro- and macroangiopathic complications, diabetes mellitus is also associated with several musculoskeletal disorders of the hand and shoulder that can be debilitating (1,2). Limited joint mobility, also termed diabetic hand syndrome or cheiropathy (3), is characterized by skin thickening over the dorsum of the hands and restricted mobility of multiple joints. While this syndrome is painless and usually not disabling (2,4), other musculoskeletal problems occur with increased frequency in diabetic patients, including Dupuytren’s disease [“Dupuytren’s disease […] may be observed in up to 42% of adults with diabetes mellitus, typically in patients with long-standing T1D” – link], carpal tunnel syndrome [“The prevalence of [carpal tunnel syndrome, CTS] in patients with diabetes has been estimated at 11–30 % […], and is dependent on the duration of diabetes. […] Type I DM patients have a high prevalence of CTS with increasing duration of disease, up to 85 % after 54 years of DM” – link], palmar flexor tenosynovitis or trigger finger [“The incidence of trigger finger [/stenosing tenosynovitis] is 7–20 % of patients with diabetes comparing to only about 1–2 % in nondiabetic patients” – link], and adhesive capsulitis of the shoulder (5–10). The association of adhesive capsulitis with pain, swelling, dystrophic skin, and vasomotor instability of the hand constitutes the “shoulder-hand syndrome,” a rare but potentially disabling manifestation of diabetes (1,2).”

“The prevalence of musculoskeletal disorders was greater in diabetic patients than in control patients (36% vs. 9%, P < 0.01). Adhesive capsulitis was present in 12% of the diabetic patients and none of the control patients (P < 0.01), Dupuytren’s disease in 16% of diabetic and 3% of control patients (P < 0.01), and flexor tenosynovitis in 12% of diabetic and 2% of control patients (P < 0.04), while carpal tunnel syndrome occurred in 12% of diabetic patients and 8% of control patients (P = 0.29). Musculoskeletal disorders were more common in patients with type 1 diabetes than in those with type 2 diabetes […]. Forty-three patients [out of 100] with type 1 diabetes had either hand or shoulder disorders (37 with hand disorders, 6 with adhesive capsulitis of the shoulder, and 10 with both syndromes), compared with 28 patients [again out of 100] with type 2 diabetes (24 with hand disorders, 4 with adhesive capsulitis of the shoulder, and 3 with both syndromes, P = 0.03).”

Association of Diabetes Mellitus With the Risk of Developing Adhesive Capsulitis of the Shoulder: A Longitudinal Population-Based Followup Study.
“A total of 78,827 subjects with at least 2 ambulatory care visits with a principal diagnosis of DM in 2001 were recruited for the DM group. The non-DM group comprised 236,481 age- and sex-matched randomly sampled subjects without DM. […] During a 3-year followup period, 946 subjects (1.20%) in the DM group and 2,254 subjects (0.95%) in the non-DM group developed ACS. The crude HR of developing ACS for the DM group compared to the non-DM group was 1.333 […] the association between DM and ACS may be explained at least in part by a DM-related chronic inflammatory process with increased growth factor expression, which in turn leads to joint synovitis and subsequent capsular fibrosis.”

It is important to note when interpreting the results of the above paper that these results are based on Taiwanese population-level data, and type 1 diabetes – which is obviously the high-risk diabetes subgroup in this particular context – is rare in East Asian populations (as observed in Sperling et al., “A child in Helsinki, Finland is almost 400 times more likely to develop diabetes than a child in Sichuan, China”. The Taiwanese incidence of type 1 DM in children is estimated at ~5 in 100,000).
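
As a quick sanity check of the quoted numbers (my own arithmetic, not from the paper – note also that a crude hazard ratio is estimated from time-to-event data and so won’t exactly equal a simple ratio of 3-year risks, though the two are close here):

```python
# Reproducing the 3-year incidence proportions quoted above.
dm_cases, dm_n = 946, 78_827
non_dm_cases, non_dm_n = 2_254, 236_481

dm_risk = dm_cases / dm_n              # ~0.0120, i.e. the quoted 1.20%
non_dm_risk = non_dm_cases / non_dm_n  # ~0.0095, i.e. the quoted 0.95%

print(f"DM: {dm_risk:.2%}, non-DM: {non_dm_risk:.2%}")
print(f"crude risk ratio: {dm_risk / non_dm_risk:.2f}")  # ~1.26, vs the reported crude HR of 1.333
```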

iv. Parents who let diabetic son starve to death found guilty of first-degree murder. It’s been a while since I last saw one of these ‘boost-your-faith-in-humanity’ cases, but in my impression they do pop up every now and then. I should probably keep one of these articles at hand in case my parents ever express worry to me that they weren’t good parents; they could have done a lot worse…

v. Freedom of medicine. One quote from the conclusion of Cochran’s post:

“[I]t is surely possible to materially improve the efficacy of drug development, of medical research as a whole. We’re doing better than we did 500 years ago – although probably worse than we did 50 years ago. But I would approach it by learning as much as possible about medical history, demographics, epidemiology, evolutionary medicine, theory of senescence, genetics, etc. Read Koch, not Hayek. There is no royal road to medical progress.”

I agree, and I was considering including some related comments and observations about health economics in this post – however I ultimately decided against doing that, in part because the post was growing unwieldy; I might include those observations in another post later on. Here’s another somewhat older Westhunt post I at some point decided to bookmark – I particularly like the following neat quote from the comments, which expresses a view I have of course expressed myself in the past here on this blog:

“When you think about it, falsehoods, stupid crap, make the best group identifiers, because anyone might agree with you when you’re obviously right. Signing up to clear nonsense is a better test of group loyalty. A true friend is with you when you’re wrong. Ideally, not just wrong, but barking mad, rolling around in your own vomit wrong.”

vi. Economic Costs of Diabetes in the U.S. in 2012.

“Approximately 59% of all health care expenditures attributed to diabetes are for health resources used by the population aged 65 years and older, much of which is borne by the Medicare program […]. The population 45–64 years of age incurs 33% of diabetes-attributed costs, with the remaining 8% incurred by the population under 45 years of age. The annual attributed health care cost per person with diabetes […] increases with age, primarily as a result of increased use of hospital inpatient and nursing facility resources, physician office visits, and prescription medications. Dividing the total attributed health care expenditures by the number of people with diabetes, we estimate the average annual excess expenditures for the population aged under 45 years, 45–64 years, and 65 years and above, respectively, at $4,394, $5,611, and $11,825.”

“Our logistic regression analysis with NHIS data suggests that diabetes is associated with a 2.4 percentage point increase in the likelihood of leaving the workforce for disability. This equates to approximately 541,000 working-age adults leaving the workforce prematurely and 130 million lost workdays in 2012. For the population that leaves the workforce early because of diabetes-associated disability, we estimate that their average daily earnings would have been $166 per person (with the amount varying by demographic). Presenteeism accounted for 30% of the indirect cost of diabetes. The estimate of a 6.6% annual decline in productivity attributed to diabetes (in excess of the estimated decline in the absence of diabetes) equates to 113 million lost workdays per year.”
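
The lost-workdays figure is roughly consistent with the number of people leaving the workforce; a quick back-of-the-envelope check (my own, assuming ~240 working days per person per year – the paper’s exact convention isn’t stated in the quote):

```python
leavers = 541_000        # working-age adults leaving the workforce early due to diabetes
workdays_per_year = 240  # assumed: ~52 weeks * 5 days, minus holidays and vacation
print(f"{leavers * workdays_per_year / 1e6:.0f} million lost workdays")  # -> ~130 million
```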

vii. Total red meat intake of ≥0.5 servings/d does not negatively influence cardiovascular disease risk factors: a systemically searched meta-analysis of randomized controlled trials.

viii. Effect of longer term modest salt reduction on blood pressure: Cochrane systematic review and meta-analysis of randomised trials. Did I blog this paper at some point in the past? I could not find any coverage of it on the blog when I searched for it so I decided to include it here, even if I have a nagging suspicion I may have talked about these findings before. What did they find? The short version is this:

“A modest reduction in salt intake for four or more weeks causes significant and, from a population viewpoint, important falls in blood pressure in both hypertensive and normotensive individuals, irrespective of sex and ethnic group. Salt reduction is associated with a small physiological increase in plasma renin activity, aldosterone, and noradrenaline and no significant change in lipid concentrations. These results support a reduction in population salt intake, which will lower population blood pressure and thereby reduce cardiovascular disease.”

ix. Some wikipedia links:

Heroic Age of Antarctic Exploration (featured).

Wien’s displacement law.

Kuiper belt (featured).

Treason (one quote worth including here: “Currently, the consensus among major Islamic schools is that apostasy (leaving Islam) is considered treason and that the penalty is death; this is supported not in the Quran but in the Hadith.[42][43][44][45][46][47]”).

Lymphatic filariasis.

File:World map of countries by number of cigarettes smoked per adult per year.

Australian gold rushes.

Savant syndrome (“It is estimated that 10% of those with autism have some form of savant abilities”). A small sidenote of interest to Danish readers: The Danish Broadcasting Corporation recently featured a series about autistics with ‘special abilities’ – the show was called ‘The hidden talents’ (De skjulte talenter), and after multiple people had nagged me to watch it I ended up deciding to do so. Most of the people in that show presumably had some degree of ‘savantism’ combined with autism at the milder end of the spectrum, i.e. Asperger’s. I was somewhat conflicted about what to think of the show and did consider blogging it in detail (in Danish?), but I decided against it. However, I do want to add, for the Danish readers who’ve seen the show, that they would do well to keep in mind that a) the great majority of autistics do not have abilities like these, b) many autistics with abilities like these presumably do quite poorly, and c) many autistics have even greater social impairments than do people like e.g. the (very likeable, I have to add…) Louise Wille from the show.

Quark–gluon plasma.

Simo Häyhä.

Chernobyl liquidators.

Black Death (“Over 60% of Norway’s population died in 1348–1350”).

Renault FT (“among the most revolutionary and influential tank designs in history”).

Weierstrass function (“an example of a pathological real-valued function on the real line. The function has the property of being continuous everywhere but differentiable nowhere”).
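
For reference, the classical form of the function, as given in the wiki article (the constraint on the parameters is Weierstrass’s original one; later work relaxed it), is:

```latex
W(x) = \sum_{n=0}^{\infty} a^{n} \cos\left(b^{n} \pi x\right),
\qquad 0 < a < 1, \quad b \text{ a positive odd integer}, \quad ab > 1 + \tfrac{3\pi}{2}.
```

Continuity follows from uniform convergence (the Weierstrass M-test, since 0 < a < 1); the lower bound on ab is what makes the oscillations grow fast enough under differencing to destroy differentiability everywhere.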

W Ursae Majoris variable.

Void coefficient. (“a number that can be used to estimate how much the reactivity of a nuclear reactor changes as voids (typically steam bubbles) form in the reactor moderator or coolant. […] Reactivity is directly related to the tendency of the reactor core to change power level: if reactivity is positive, the core power tends to increase; if it is negative, the core power tends to decrease; if it is zero, the core power tends to remain stable. […] A positive void coefficient means that the reactivity increases as the void content inside the reactor increases due to increased boiling or loss of coolant; for example, if the coolant acts as a neutron absorber. If the void coefficient is large enough and control systems do not respond quickly enough, this can form a positive feedback loop which can quickly boil all the coolant in the reactor. This happened in the RBMK reactor that was destroyed in the Chernobyl disaster.”).
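
The feedback loop described in the quote is easy to caricature in a few lines of code; the toy model below is entirely my own construction (the coefficients and the linear power–void relation are arbitrary illustrative choices), not anything from the article:

```python
# Toy model: a reactor at equilibrium power 1.0 is slightly perturbed, and
# reactivity responds to the resulting change in void fraction. A negative
# void coefficient damps the perturbation back toward equilibrium; a
# positive one amplifies it into a runaway. All numbers are illustrative.
def simulate(void_coeff, power=1.05, steps=12):
    for _ in range(steps):
        void_change = 0.1 * (power - 1.0)      # more power -> more boiling (toy relation)
        reactivity = void_coeff * void_change  # the void coefficient couples voids to reactivity
        power *= 1.0 + reactivity              # positive reactivity -> power rises
    return power

print(f"negative coefficient: power ends near {simulate(-5.0):.3f}")  # drifts back toward 1.0
print(f"positive coefficient: power ends near {simulate(+5.0):.3f}")  # grows explosively
```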

Gregor MacGregor (featured) (“a Scottish soldier, adventurer, and confidence trickster […] MacGregor’s Poyais scheme has been called one of the most brazen confidence tricks in history.”).

Stimming.

Irish Civil War.

March 10, 2017 Posted by | Astronomy, autism, Cardiology, Diabetes, Economics, Epidemiology, Health Economics, History, Infectious disease, Mathematics, Medicine, Papers, Physics, Psychology, Random stuff, Wikipedia | Leave a comment

Random Stuff

i. On the youtube channel of the Institute for Advanced Study there has been a lot of activity over the last week or two (far more than 100 new lectures have been uploaded, and it seems new uploads are still being added at this point), and I’ve been watching a few of the recently uploaded astrophysics lectures. They’re quite technical, but you can follow enough of the content to have an enjoyable time despite not understanding everything:


This is a good lecture, very interesting. One major point made early on: “the take-away message is that the most common planet in the galaxy, at least at shorter periods, are planets for which there is no analogue in the solar system. The most common kind of planet in the galaxy is a planet with a radius of two Earth radii.” Another big take-away message is that small planets seem to be quite common (as noted in the conclusions, “16% of Sun-like stars have an Earth-sized planet”).


Of the lectures included in this post this was the one I liked the least; there are too many (‘obstructive’) questions/interactions between the lecturer and attendees along the way, and the interactions/questions are difficult to hear/understand. If you consider watching both this lecture and the one below, it would probably be wise to watch the lecture below first; in retrospect, some of the observations made early on in the lecture below would have been useful to know about before watching this one. (The first half of the lecture below was incidentally somewhat easier for me to follow than the second half, but especially the first half hour of it is really quite good, despite the bad start (which one can always blame on Microsoft…)).

ii. Words I’ve encountered recently (…or ‘recently’ – it’s been a while since I last posted one of these lists): Divagations, periphrasis, reedy, architrave, sett, pedipalp, tout, togs, edentulous, moue, tatty, tearaway, prorogue, piscine, fillip, sop, panniers, auxology, roister, prepossessing, cantle, catamite, couth, ordure, biddy, recrudescence, parvenu, scupper, husting, hackle, expatiate, affray, tatterdemalion, eructation, coppice, dekko, scull, fulmination, pollarding, grotty, secateurs, bumf (I must admit that I like this word – it seems fitting, somehow, to use that word for this concept…), durophagy, randy (brief note to self: Advise people having children who ask me about suggestions for how to name them against using this name (or variants such as Randi), it does not seem like a great idea), effete, apricity, sororal, bint, coition, abaft, eaves, gadabout, lugubriously, retroussé, landlubber, deliquescence, antimacassar, inanition.

iii. “The point of rigour is not to destroy all intuition; instead, it should be used to destroy bad intuition while clarifying and elevating good intuition. It is only with a combination of both rigorous formalism and good intuition that one can tackle complex mathematical problems; one needs the former to correctly deal with the fine details, and the latter to correctly deal with the big picture. Without one or the other, you will spend a lot of time blundering around in the dark (which can be instructive, but is highly inefficient). So once you are fully comfortable with rigorous mathematical thinking, you should revisit your intuitions on the subject and use your new thinking skills to test and refine these intuitions rather than discard them. One way to do this is to ask yourself dumb questions; another is to relearn your field.” (Terry Tao, There’s more to mathematics than rigour and proofs)

iv. A century of trends in adult human height. A figure from the paper (Figure 3 – Change in adult height between the 1896 and 1996 birth cohorts):

[Figure 3 from the paper – image file: elife-13410-fig3-v1]

(Click to view full size. WordPress seems to have changed the way you add images to a blog post – if this one has ended up annoyingly large, I apologize; I have tried to minimize it while still retaining detail, but the original file is huge). An observation from the paper:

“Men were taller than women in every country, on average by ~11 cm in the 1896 birth cohort and ~12 cm in the 1996 birth cohort […]. In the 1896 birth cohort, the male-female height gap in countries where average height was low was slightly larger than in taller nations. In other words, at the turn of the 20th century, men seem to have had a relative advantage over women in undernourished compared to better-nourished populations.”

I haven’t studied the paper in any detail but intend to do so at a later point in time.

v. I found this paper, on Exercise and Glucose Metabolism in Persons with Diabetes Mellitus, interesting in part because I’ve been very surprised a few times by offhand online statements made by diabetic athletes, who had observed that their blood glucose really didn’t drop all that fast during exercise. Rapid and annoyingly large drops in blood glucose during exercise have been a really consistent feature of my own life with diabetes during adulthood. It seems that there may be big inter-individual differences in terms of the effects of exercise on glucose in diabetics. From the paper:

“Typically, prolonged moderate-intensity aerobic exercise (i.e., 30–70% of one’s VO2max) causes a reduction in glucose concentrations because of a failure in circulating insulin levels to decrease at the onset of exercise.[12] During this type of physical activity, glucose utilization may be as high as 1.5 g/min in adolescents with type 1 diabetes[13] and exceed 2.0 g/min in adults with type 1 diabetes,[14] an amount that quickly lowers circulating glucose levels. Persons with type 1 diabetes have large interindividual differences in blood glucose responses to exercise, although some intraindividual reproducibility exists.[15] The wide ranging glycemic responses among individuals appears to be related to differences in pre-exercise blood glucose concentrations, the level of circulating counterregulatory hormones and the type/duration of the activity.[2]”
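
To get a feel for why 2 g/min ‘quickly lowers circulating glucose levels’, here’s a rough back-of-the-envelope calculation. All the numbers are my own illustrative assumptions (a glucose distribution volume of ~15 L and a net deficit of 1 g/min, i.e. uptake only partly offset by hepatic glucose output), not figures from the paper, and real physiology is of course much messier:

```python
volume_dl = 150                 # ~15 L glucose distribution volume, expressed in dL (assumed)
start, floor = 180, 70          # mg/dL: starting glucose and the hypoglycemia threshold
net_deficit_mg_per_min = 1_000  # net 1 g/min: ~2 g/min uptake minus hepatic output (assumed)

drop_per_min = net_deficit_mg_per_min / volume_dl  # ~6.7 mg/dL per minute
minutes = (start - floor) / drop_per_min
print(f"~{drop_per_min:.1f} mg/dL/min -> from {start} to {floor} mg/dL in ~{minutes:.0f} minutes")
```

Under these assumptions you go from clearly hyperglycemic to hypoglycemic in roughly a quarter of an hour.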

August 13, 2016 Posted by | Astronomy, Demographics, Diabetes, Language, Lectures, Mathematics, Physics, Random stuff | Leave a comment

Random stuff

I find it difficult to find the motivation to finish the half-finished drafts I have lying around, so this will have to do. Some random stuff below.

i.

(15,000 views… In some sense that seems really ‘unfair’ to me, but on the other hand I doubt either Beethoven or Gilels cares; they’re both long dead, after all…)

ii. New/newish words I’ve encountered in books, on vocabulary.com or elsewhere:

Agley, peripeteia, dissever, halidom, replevin, socage, organdie, pouffe, dyarchy, tauricide, temerarious, acharnement, cadger, gravamen, aspersion, marronage, adumbrate, succotash, deuteragonist, declivity, marquetry, machicolation, recusal.

iii. A lecture:

It’s been a long time since I watched it so I don’t have anything intelligent to say about it now, but I figured it might be of interest to one or two of the people who still subscribe to the blog despite the infrequent updates.

iv. A few wikipedia articles (I won’t comment much on the contents or quote extensively from the articles the way I’ve done in previous wikipedia posts – the links shall have to suffice for now):

Duverger’s law.

Far side of the moon.

Preference falsification.

Russian political jokes. Some of those made me laugh (e.g. this one: “A judge walks out of his chambers laughing his head off. A colleague approaches him and asks why he is laughing. ‘I just heard the funniest joke in the world!’ ‘Well, go ahead, tell me!’ says the other judge. ‘I can’t – I just gave someone ten years for it!’”).

Political mutilation in Byzantine culture.

v. World War 2, if you think of it as a movie, has a highly unrealistic and implausible plot, according to this amusing post by Scott Alexander. Having recently read a rather long book about these topics, one aspect I’d have added had I written the piece myself is that an additional factor making the setting seem even more implausible is how so many presumably quite smart people were – at least in retrospect – unbelievably stupid when it came to Hitler’s ideas and intentions before the war. Going back to Churchill’s own life, I’d also add that a movie about Churchill’s life during the war, which you could probably relatively easily make by basing it on his own copious and widely shared notes, would likely turn out quite decent. His own comments, remarks, and observations certainly made for a great book.

May 15, 2016 Posted by | Astronomy, Computer science, History, Language, Lectures, Mathematics, Music, Random stuff, Russia, Wikipedia | Leave a comment

A few lectures

Below are three new lectures from the Institute for Advanced Study. As far as I’ve gathered they’re all from an IAS symposium called ‘Lens of Computation on the Sciences’ – all three lecturers are computer scientists, but you don’t have to be a computer scientist to watch these lectures.

Should computer scientists and economists band together more and try to use the insights from one field to help solve problems in the other field? Roughgarden thinks so, and provides examples of how this might be done/has been done. Applications discussed in the lecture include traffic management and auction design. I’m not sure how much of this lecture is easy to follow for people who don’t know anything about either topic (i.e., computer science and economics), but I found it not too difficult to follow – it probably helped that I’ve actually done work on a few of the things he touches upon in the lecture, such as basic auction theory, the fixed point theorems and related proofs, basic queueing theory and basic discrete maths/graph theory. Either way there are certainly much more technical lectures than this one available at the IAS channel.

I don’t have Facebook and I’m not planning on ever getting an FB account, so I’m not really sure I care about the things this guy is trying to do, but the lecturer does touch upon some interesting topics in network theory. Not a great lecture in my opinion – occasionally the lecturer ‘drifts’ a bit, talking without saying very much – but it’s also not a terrible lecture. A few times I was really annoyed that you can’t see where he’s pointing that damn laser pointer, but this issue should not stop you from watching the video, especially not if you have an interest in analytical aspects of how to approach and make sense of ‘Big Data’.

I’ve noticed that Scott Alexander has said some nice things about Scott Aaronson a few times, but until now I’ve never actually read any of the latter guy’s stuff or watched any lectures by him. I agree with Scott (Alexander) that Scott (Aaronson) is definitely a smart guy. This is an interesting lecture; I won’t pretend I understood all of it, but it has some thought-provoking ideas and important points in the context of quantum computing and it’s actually a quite entertaining lecture; I was close to laughing a couple of times.

January 8, 2016 Posted by | Computer science, Economics, Game theory, Lectures, Mathematics, Physics | Leave a comment

Quotes

i. “By all means think yourself big but don’t think everyone else small” (‘Notes on Flyleaf of Fresh ms. Book’, Scott’s Last Expedition. See also this).

ii. “The man who knows everyone’s job isn’t much good at his own.” (-ll-)

iii. “It is amazing what little harm doctors do when one considers all the opportunities they have” (Mark Twain, as quoted in the Oxford Handbook of Clinical Medicine, p.595).

iv. “A first-rate theory predicts; a second-rate theory forbids and a third-rate theory explains after the event.” (Aleksander Isaakovich Kitaigorodski)

v. “[S]ome of the most terrible things in the world are done by people who think, genuinely think, that they’re doing it for the best” (Terry Pratchett, Snuff).

vi. “That was excellently observ’d, say I, when I read a Passage in an Author, where his Opinion agrees with mine. When we differ, there I pronounce him to be mistaken.” (Jonathan Swift)

vii. “Death is nature’s master stroke, albeit a cruel one, because it allows genotypes space to try on new phenotypes.” (Quote from the Oxford Handbook of Clinical Medicine, p.6)

viii. “The purpose of models is not to fit the data but to sharpen the questions.” (Samuel Karlin)

ix. “We may […] view set theory, and mathematics generally, in much the way in which we view theoretical portions of the natural sciences themselves; as comprising truths or hypotheses which are to be vindicated less by the pure light of reason than by the indirect systematic contribution which they make to the organizing of empirical data in the natural sciences.” (Quine)

x. “At root what is needed for scientific inquiry is just receptivity to data, skill in reasoning, and yearning for truth. Admittedly, ingenuity can help too.” (-ll-)

xi. “A statistician carefully assembles facts and figures for others who carefully misinterpret them.” (Quote from Mathematically Speaking – A Dictionary of Quotations, p.329. Only source given in the book is: “Quoted in Evan Esar, 20,000 Quips and Quotes“)

xii. “A knowledge of statistics is like a knowledge of foreign languages or of algebra; it may prove of use at any time under any circumstances.” (Quote from Mathematically Speaking – A Dictionary of Quotations, p. 328. The source provided is: “Elements of Statistics, Part I, Chapter I (p.4)”).

xiii. “We own to small faults to persuade others that we have not great ones.” (Rochefoucauld)

xiv. “There is more self-love than love in jealousy.” (-ll-)

xv. “We should not judge of a man’s merit by his great abilities, but by the use he makes of them.” (-ll-)

xvi. “We should gain more by letting the world see what we are than by trying to seem what we are not.” (-ll-)

xvii. “Put succinctly, a prospective study looks for the effects of causes whereas a retrospective study examines the causes of effects.” (Quote from p.49 of Principles of Applied Statistics, by Cox & Donnelly)

xviii. “… he who seeks for methods without having a definite problem in mind seeks for the most part in vain.” (David Hilbert)

xix. “Give every man thy ear, but few thy voice” (Shakespeare).

xx. “Often the fear of one evil leads us into a worse.” (Nicolas Boileau-Despréaux)

 

November 22, 2015 Posted by | Books, Mathematics, Medicine, Philosophy, Quotes/aphorisms, Science, Statistics | Leave a comment

The Nature of Statistical Evidence

Here’s my goodreads review of the book.

As I’ve observed many times before, a wordpress blog like mine is not a particularly nice place to cover mathematical topics involving equations and lots of Greek letters, so the coverage below will be more or less purely conceptual; don’t take this to mean that the book doesn’t contain formulas. Some parts of the book look like this:

[Image: a formula-dense passage from the book (image file: Loeve)]
That of course makes the book hard to blog, also for other reasons than the fact that it’s typographically hard to deal with the equations. In general it’s hard to talk about the content of a book like this one without going into a lot of details outlining how you get from A to B to C – usually you’re only really interested in C, but you need A and B to make sense of C. At this point I’ve sort of concluded that when covering books like this one I’ll only cover some of the main themes which are easy to discuss in a blog post, and that I should skip coverage of (potentially important, potentially interesting) points which are difficult to discuss in a small amount of space, which is unfortunately often the case. I should perhaps observe that although I noted in my goodreads review that for my taste there was in a way a bit too much philosophy and a bit too little statistics in the coverage, you should definitely not take that objection to mean that this book is full of fluff; a lot of the philosophical stuff is ‘formal logic’ type material and related comments, and the book in general is quite dense. As I also noted in the goodreads review I didn’t read this book as carefully as I might have done – for example I skipped a couple of the technical proofs because they didn’t seem to be worth the effort – and I’d probably need to read it again to fully understand some of the minor points made throughout the more technical parts of the coverage; that’s of course a related reason why I don’t cover the book in a great amount of detail here. It’s hard work just to read the damn thing; to talk about the technical stuff in detail here as well would definitely be overkill, even if it would surely make me understand the material better.

I have added some observations from the coverage below. I’ve tried to clarify beforehand which question/topic the quote in question deals with, to ease reading/understanding of the topics covered.

On how statistical methods are related to experimental science:

“statistical methods have aims similar to the process of experimental science. But statistics is not itself an experimental science, it consists of models of how to do experimental science. Statistical theory is a logical — mostly mathematical — discipline; its findings are not subject to experimental test. […] The primary sense in which statistical theory is a science is that it guides and explains statistical methods. A sharpened statement of the purpose of this book is to provide explanations of the senses in which some statistical methods provide scientific evidence.”

On mathematics and axiomatic systems (the book goes into much more detail than this):

“It is not sufficiently appreciated that a link is needed between mathematics and methods. Mathematics is not about the world until it is interpreted and then it is only about models of the world […]. No contradiction is introduced by either interpreting the same theory in different ways or by modeling the same concept by different theories. […] In general, a primitive undefined term is said to be interpreted when a meaning is assigned to it and when all such terms are interpreted we have an interpretation of the axiomatic system. It makes no sense to ask which is the correct interpretation of an axiom system. This is a primary strength of the axiomatic method; we can use it to organize and structure our thoughts and knowledge by simultaneously and economically treating all interpretations of an axiom system. It is also a weakness in that failure to define or interpret terms leads to much confusion about the implications of theory for application.”

It’s all about models:

“The scientific method of theory checking is to compare predictions deduced from a theoretical model with observations on nature. Thus science must predict what happens in nature but it need not explain why. […] whether experiment is consistent with theory is relative to accuracy and purpose. All theories are simplifications of reality and hence no theory will be expected to be a perfect predictor. Theories of statistical inference become relevant to scientific process at precisely this point. […] Scientific method is a practice developed to deal with experiments on nature. Probability theory is a deductive study of the properties of models of such experiments. All of the theorems of probability are results about models of experiments.”

But given a frequentist interpretation you can test your statistical theories with the real world, right? Right? Well…

“How might we check the long run stability of relative frequency? If we are to compare mathematical theory with experiment then only finite sequences can be observed. But for the Bernoulli case, the event that frequency approaches probability is stochastically independent of any sequence of finite length. […] Long-run stability of relative frequency cannot be checked experimentally. There are neither theoretical nor empirical guarantees that, a priori, one can recognize experiments performed under uniform conditions and that under these circumstances one will obtain stable frequencies.” [related link]
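
The Bernoulli point is perhaps easier to appreciate with a small simulation (a sketch of my own, not from the book – the parameter and the sample sizes are arbitrary choices):

```python
import random

random.seed(0)
p = 0.5      # the model parameter; the point is that no finite prefix can verify it
n = 10_000
heads = 0
for i in range(1, n + 1):
    heads += random.random() < p
    if i in (10, 100, 1_000, 10_000):
        print(f"n = {i:>6}: relative frequency = {heads / i:.4f}")

# We can watch the relative frequency settle down, but the exact sequence we
# observed has probability q**heads * (1-q)**(n-heads) > 0 under *every* q in
# (0,1) -- so the finite data are logically consistent with any limiting
# frequency, which is the sense in which long-run stability can't be checked.
```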

What should we expect to get out of mathematical and statistical theories of inference?

“What can we expect of a theory of statistical inference? We can expect an internally consistent explanation of why certain conclusions follow from certain data. The theory will not be about inductive rationality but about a model of inductive rationality. Statisticians are used to thinking that they apply their logic to models of the physical world; less common is the realization that their logic itself is only a model. Explanation will be in terms of introduced concepts which do not exist in nature. Properties of the concepts will be derived from assumptions which merely seem reasonable. This is the only sense in which the axioms of any mathematical theory are true […] We can expect these concepts, assumptions, and properties to be intuitive but, unlike natural science, they cannot be checked by experiment. Different people have different ideas about what “seems reasonable,” so we can expect different explanations and different properties. We should not be surprised if the theorems of two different theories of statistical evidence differ. If two models had no different properties then they would be different versions of the same model […] We should not expect to achieve, by mathematics alone, a single coherent theory of inference, for mathematical truth is conditional and the assumptions are not “self-evident.” Faith in a set of assumptions would be needed to achieve a single coherent theory.”

On disagreements about the nature of statistical evidence:

“The context of this section is that there is disagreement among experts about the nature of statistical evidence and consequently much use of one formulation to criticize another. Neyman (1950) maintains that, from his behavioral hypothesis testing point of view, Fisherian significance tests do not express evidence. Royall (1997) employs the “law” of likelihood to criticize hypothesis as well as significance testing. Pratt (1965), Berger and Selke (1987), Berger and Berry (1988), and Casella and Berger (1987) employ Bayesian theory to criticize sampling theory. […] Critics assume that their findings are about evidence, but they are at most about models of evidence. Many theoretical statistical criticisms, when stated in terms of evidence, have the following outline: According to model A, evidence satisfies proposition P. But according to model B, which is correct since it is derived from “self-evident truths,” P is not true. Now evidence can’t be two different ways so, since B is right, A must be wrong. Note that the argument is symmetric: since A appears “self-evident” (to adherents of A) B must be wrong. But both conclusions are invalid since evidence can be modeled in different ways, perhaps useful in different contexts and for different purposes. From the observation that P is a theorem of A but not of B, all we can properly conclude is that A and B are different models of evidence. […] The common practice of using one theory of inference to critique another is a misleading activity.”

Is mathematics a science?

“Is mathematics a science? It is certainly systematized knowledge much concerned with structure, but then so is history. Does it employ the scientific method? Well, partly; hypothesis and deduction are the essence of mathematics and the search for counter examples is a mathematical counterpart of experimentation; but the question is not put to nature. Is mathematics about nature? In part. The hypotheses of most mathematics are suggested by some natural primitive concept, for it is difficult to think of interesting hypotheses concerning nonsense syllables and to check their consistency. However, it often happens that as a mathematical subject matures it tends to evolve away from the original concept which motivated it. Mathematics in its purest form is probably not natural science since it lacks the experimental aspect. Art is sometimes defined to be creative work displaying form, beauty and unusual perception. By this definition pure mathematics is clearly an art. On the other hand, applied mathematics, taking its hypotheses from real world concepts, is an attempt to describe nature. Applied mathematics, without regard to experimental verification, is in fact largely the “conditional truth” portion of science. If a body of applied mathematics has survived experimental test to become trustworthy belief then it is the essence of natural science.”

Then what about statistics – is statistics a science?

“Statisticians can and do make contributions to subject matter fields such as physics, and demography but statistical theory and methods proper, distinguished from their findings, are not like physics in that they are not about nature. […] Applied statistics is natural science but the findings are about the subject matter field not statistical theory or method. […] Statistical theory helps with how to do natural science but it is not itself a natural science.”

I should note that I am, and have for a long time been, in broad agreement with the author’s remarks on the nature of science and mathematics above. Popper, among many others, discussed this topic a long time ago, e.g. in The Logic of Scientific Discovery, and I’ve basically been of the opinion that (‘pure’) mathematics is not science – but rather ‘something else’, which doesn’t mean it’s not useful – for probably a decade. I’ve had a harder time coming to terms with how precisely to deal with statistics in terms of these things, and in that context the book has been conceptually helpful.

Below I’ve added a few links to other stuff also covered in the book:
Propositional calculus.
Kolmogorov’s axioms.
Neyman-Pearson lemma.
Radon–Nikodym theorem. (Not covered in the book, but the necessity of using ‘a Radon–Nikodym derivative’ to obtain an answer to a question being asked was remarked upon at one point, and I had no clue what he was talking about – it seems that the stuff in the link was what he was talking about.)
A very specific and relevant link: Berger and Wolpert (1984). The stuff about Birnbaum’s argument covered from p.24 (p.40) and forward is covered in some detail in the book. The author is critical of the model and explains in the book in some detail why that is. See also: On the foundations of statistical inference (Birnbaum, 1962).

October 6, 2015 Posted by | Books, Mathematics, Papers, Philosophy, Science, Statistics | 4 Comments

A few lectures

This one was mostly review for me, but there was also some new stuff and it was a ‘sort of okay’ lecture, even if I was highly skeptical about a few points covered. I was debating whether to even post the lecture on account of those points of contention, but I figured that by adding a few remarks below I could justify doing it. So below are a few skeptical comments relating to content covered in the lecture:

a) 28-29 minutes in he mentions that the cutoff for hypertension in diabetics is a systolic pressure above 130. Here opinions definitely differ, as do opinions about treatment cutoffs; in the annual report from the Danish Diabetes Database they follow up on whether hospitals and other medical decision-making units are following guidelines (I’ve talked about the data on the blog, e.g. here), and the BP indicator currently evaluated is whether diabetics with systolic BP above 140 receive antihypertensive treatment. This recent Cochrane review concluded that: “At the present time, evidence from randomized trials does not support blood pressure targets lower than the standard targets in people with elevated blood pressure and diabetes” and noted that: “The effect of SBP targets on mortality was compatible with both a reduction and increase in risk […] Trying to achieve the ‘lower’ SBP target was associated with a significant increase in the number of other serious adverse events”.

b) Whether retinopathy screenings should be conducted yearly or biennially is also contested, and opinions differ – this is not mentioned in the lecture, but I sort of figure maybe it should have been. There’s some evidence that annual screening is better (see e.g. this recent review), but the evidence base is not great and clinical outcomes do not seem to differ much in general; as noted in the review, “Observational and economic modelling studies in low-risk patients show little difference in clinical outcomes between screening intervals of 1 year or 2 years”. To stratify based on risk seems desirable from a cost-effectiveness standpoint, but how to stratify optimally seems to not be completely clear at the present point in time.

c) The Somogyi phenomenon is highly contested, and I was very surprised about his coverage of this topic – ‘he’s a doctor lecturing on this topic, he should know better’. As the wiki notes: “Although this theory is well known among clinicians and individuals with diabetes, there is little scientific evidence to support it.” I’m highly skeptical, and I seriously question the advice of lowering insulin in the context of morning hyperglycemia. As observed in Cryer’s text: “there is now considerable evidence against the Somogyi hypothesis (Guillod et al. 2007); morning hyperglycemia is the result of insulin lack, not post-hypoglycemic insulin resistance (Havlin and Cryer 1987; Tordjman et al. 1987; Hirsch et al. 1990). There is a dawn phenomenon—a growth hormone–mediated increase in the nighttime to morning plasma glucose concentration (Campbell et al. 1985)—but its magnitude is small (Periello et al. 1991).”

I decided not to embed this lecture in the post mainly because the resolution is unsatisfactorily low so that a substantial proportion of the visual content is frankly unintelligible; I figured this would bother others more than it did me and that a semi-satisfactory compromise solution in terms of coverage would be to link to the lecture, but not embed it here. You can hear what the lecturer is saying, which was enough for me, but you can’t make out stuff like effect differences, p-values, or many of the details in the graphic illustrations included. Despite the title of the lecture on youtube, the lecture actually mainly consists of a brief overview of pharmacological treatment options for diabetes.

If you want to skip the introduction, the first talk/lecture starts around 5 minutes and 30 seconds into the video. Note that despite the long running time of this video the lectures themselves only take about 50 minutes in total; the rest of it is post-lecture Q&A and discussion.

October 3, 2015 Posted by | Diabetes, Lectures, Mathematics, Medicine, Nephrology, Pharmacology | Leave a comment

Mathematically Speaking

This is a book full of quotes on the topic of mathematics. As is always the case for books full of quotations, most of the quotes in this book aren’t very good, but occasionally you come across a quote or two that enable you to justify reading on. I’ll likely include some of the good/interesting quotes in the book in future ‘quotes’ posts. Below I’ve added some sample quotes from the book. I’ve read roughly three-fifths of the book so far and I’m currently hovering around a two-star rating on goodreads.

“Since authors seldom, if ever, say what they mean, the following glossary is offered to neophytes in mathematical research to help them understand the language that surrounds the formulas …

ANALOGUE. This is an a. of: I have to have some excuse for publishing it.
APPLICATIONS. This is of interest in a.: I have to have some excuse for publishing it.
COMPLETE. The proof is now c.: I can’t finish it. […]
DIFFICULT. This problem is d.: I don’t know the answer. (Cf. Trivial)
GENERALITY. Without loss of g.: I have done an easy special case. […]
INTERESTING. X’s paper is I.: I don’t understand it.
KNOWN. This is a k. result but I reproduce the proof for convenience of the reader: My paper isn’t long enough. […]
NEW. This was proved by X but the following n. proof may present points of interest: I can’t understand X.
NOTATION. To simplify the n.: It is too much trouble to change now.
OBSERVED. It will be o. that: I hope you have not noticed that.
OBVIOUS. It is o.: I can’t prove it.
READER. The details may be left to the r.: I can’t do it. […]
STRAIGHTFORWARD. By a s. computation: I lost my notes.
TRIVIAL. This problem is t.: I know the answer (Cf. Difficult).
WELL-KNOWN. The result is w.: I can’t find the reference.” (Pétard, H. [Pondiczery, E.S.]).

Here are a few quotes similar to the ones above, provided by a different, unknown source:
“BRIEFLY: I’m running out of time, so I’ll just write and talk faster. […]
HE’S ONE OF THE GREAT LIVING MATHEMATICIANS: He’s written 5 papers and I’ve read 2 of them. […]
I’VE HEARD SO MUCH ABOUT YOU: Stalling a minute may give me time to recall who you are. […]
QUANTIFY: I can’t find anything wrong with your proof except that it won’t work if x is a moon of Jupiter (popular in applied math courses). […]
SKETCH OF A PROOF: I couldn’t verify all the details, so I’ll break it down into the parts I couldn’t prove.
YOUR TALK WAS VERY INTERESTING: I can’t think of anything to say about your talk.” (‘Unknown’)

“Mathematics is neither a description of nature nor an explanation of its operation; it is not concerned with physical motion or with the metaphysical generation of quantities. It is merely the symbolic logic of possible relations, and as such is concerned with neither approximate nor absolute truth, but only with hypothetical truth. That is, mathematics determines which conclusions will follow logically from given premises. The conjunction of mathematics and philosophy, or of mathematics and science is frequently of great service in suggesting new problems and points of view.” (Carl Boyer)

“It’s the nature of mathematics to pose more problems than it can solve.” (Ivars Peterson)

“the social scientist who lacks a mathematical mind and regards a mathematical formula as a magic recipe, rather than as the formulation of a supposition, does not hold forth much promise. A mathematical formula is never more than a precise statement. It must not be made into a Procrustean bed […] The chief merit of mathematization is that it compels us to become conscious of what we are assuming.” (Bertrand de Jouvenel)

“As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality.” (Albert Einstein)

“[Mathematics] includes much that will neither hurt one who does not know it nor help one who does.” (J. B. Mencke)

“Pure mathematics consists entirely of asseverations to the extent that, if such and such a proposition is true of anything, then such and such another proposition is true of anything. It is essential not to discuss whether the first proposition is really true, and not to mention what the anything is, of which it is supposed to be true … If our hypothesis is about anything, and not about some one or more particular things, then our deductions constitute mathematics. Thus mathematics may be defined as the subject in which we never know what we are talking about, nor whether what we are saying is true.” (Bertrand Russell)

“Mathematical rigor is like clothing; in its style it ought to suit the occasion, and it diminishes comfort and restricts freedom of movement if it is either too loose or too tight.” (G. F. Simmons).

“at a great distance from its empirical source, or after much “abstract” inbreeding, a mathematical subject is in danger of degeneration. At the inception the style is usually classical; when it shows signs of becoming baroque, then the danger signal is up … In any event, whenever this stage is reached, the only remedy seems to me to be the rejuvenating return to the source: the reinjection of more or less directly empirical ideas.” (John von Neumann)

September 26, 2015 Posted by | Books, Mathematics, Quotes/aphorisms | Leave a comment

Random stuff/Open Thread

i. A lecture on mathematical proofs:

ii. “In the fall of 1944, only seven percent of all bombs dropped by the Eighth Air Force hit within 1,000 feet of their aim point.”

From wikipedia’s article on Strategic bombing during WW2. The article has a lot of stuff. The ‘RAF estimates of destruction of “built up areas” of major German cities’ numbers in the article made my head spin – they didn’t bomb the Germans back to the stone age, but they sure tried. Here’s another observation from the article:

“After the war, the U.S. Strategic Bombing Survey reviewed the available casualty records in Germany, and concluded that official German statistics of casualties from air attack had been too low. The survey estimated that at a minimum 305,000 were killed in German cities due to bombing and estimated a minimum of 780,000 wounded. Roughly 7,500,000 German civilians were also rendered homeless.” (The German population at the time was roughly 70 million).

iii. Also war-related: Eddie Slovik:

Edward Donald “Eddie” Slovik (February 18, 1920 – January 31, 1945) was a United States Army soldier during World War II and the only American soldier to be court-martialled and executed for desertion since the American Civil War.[1][2]

Although over 21,000 American soldiers were given varying sentences for desertion during World War II, including 49 death sentences, Slovik’s was the only death sentence that was actually carried out.[1][3][4]

During World War II, 1.7 million courts-martial were held, representing one third of all criminal cases tried in the United States during the same period. Most of the cases were minor, as were the sentences.[2] Nevertheless, a clemency board, appointed by the Secretary of War in the summer of 1945, reviewed all general courts-martial where the accused was still in confinement.[2][5] That Board remitted or reduced the sentence in 85 percent of the 27,000 serious cases reviewed.[2] The death penalty was rarely imposed, and those cases typically were for rapes or murders. […] In France during World War I from 1917 to 1918, the United States Army executed 35 of its own soldiers, but all were convicted of rape and/or unprovoked murder of civilians and not for military offenses.[13] During World War II in all theaters of the war, the United States military executed 102 of its own soldiers for rape and/or unprovoked murder of civilians, but only Slovik was executed for the military offense of desertion.[2][14] […] of the 2,864 army personnel tried for desertion for the period January 1942 through June 1948, 49 were convicted and sentenced to death, and 48 of those sentences were voided by higher authority.”

What motivated me to read the article was mostly curiosity about how many people were actually executed for deserting during the war, a question I’d never encountered any answers to previously. The US number turned out to be, well, let’s just say it’s lower than I’d expected it would be. American soldiers who chose to desert during the war seem to have had much, much better chances of surviving the war than had soldiers who did not. Slovik was not a lucky man. On a related note, given numbers like these I’m really surprised desertion rates were not much higher than they were; presumably community norms (‘desertion = disgrace’, which would probably rub off on other family members…) played a key role here.

iv. Chess and infinity. I haven’t posted this link before even though the thread is a few months old, and I figured that given that I just had a conversation on related matters in the comment section of SCC (here’s a link) I might as well post some of this stuff here. Some key points from the thread (I had to make slight formatting changes to the quotes because wordpress had trouble displaying some of the numbers, but the content is unchanged):

u/TheBB:
“Shannon has estimated the number of possible legal positions to be about 10^43. The number of legal games is quite a bit higher, estimated by Littlewood and Hardy to be around 10^(10^5) (commonly cited as 10^(10^50) perhaps due to a misprint). This number is so large that it can’t really be compared with anything that is not combinatorial in nature. It is far larger than the number of subatomic particles in the observable universe, let alone stars in the Milky Way galaxy.

As for your bonus question, a typical chess game today lasts about 40 to 60 moves (let’s say 50). Let us say that there are 4 reasonable candidate moves in any given position. I suspect this is probably an underestimate if anything, but let’s roll with it. That gives us about 4^(2×50) ≈ 10^60 games that might reasonably be played by good human players. If there are 6 candidate moves, we get around 10^77, which is in the neighbourhood of the number of particles in the observable universe.”

u/Wondersnite:
“To put 10^(10^5) into perspective:

There are 10^80 protons in the Universe. Now imagine inside each proton, we had a whole entire Universe. Now imagine again that inside each proton inside each Universe inside each proton, you had another Universe. If you count up all the protons, you get (10^80)^3 = 10^240, which is nowhere near the number we’re looking for.

You have to have Universes inside protons all the way down to 1250 steps to get the number of legal chess games that are estimated to exist. […]

Imagine that every single subatomic particle in the entire observable universe was a supercomputer that analysed a possible game in a single Planck unit of time (10^-43 seconds, the time it takes light in a vacuum to travel 10^-20 times the width of a proton), and that every single subatomic particle computer was running from the beginning of time up until the heat death of the Universe, 10^1000 years ≈ 10^11 × 10^1000 seconds from now.

Even in these ridiculously favorable conditions, we’d only be able to calculate

10^80 × 10^43 × 10^11 × 10^1000 = 10^1134

possible games. Again, this doesn’t even come close to 10^(10^5) = 10^100000.

Basically, if we ever solve the game of chess, it definitely won’t be through brute force.”
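
The exponent bookkeeping in those two comments is easy to reproduce (a quick check of my own of their arithmetic, nothing more):

```python
import math

# 'Reasonable' games: c candidate moves per ply, 50 moves = 100 plies.
for c in (4, 6):
    print(f"{c} candidates per ply: ~10^{100 * math.log10(c):.0f} plausible games")
# -> ~10^60 and ~10^78 (the comment rounds the latter down to 10^77)

# Universes nested inside protons: k levels give 10^(80k) protons, so
# matching the 10^100000 legal games requires k = 100000 / 80 levels.
print(f"levels needed: {100_000 / 80:.0f}")  # -> 1250

# The 'every particle computes one game per Planck time' bound just adds
# exponents: 80 + 43 + (11 + 1000) = 1134, i.e. 10^1134 games analysed,
# which is nowhere near 10^100000.
print(f"exponent of games analysed: {80 + 43 + 11 + 1000}")
```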

v. An interesting resource which a friend of mine recently shared with me and which I thought I should share here as well: Nature Reviews – Disease Primers.

vi. Here are some words I’ve recently encountered on vocabulary.com: augury, spangle, imprimatur, apperception, contrition, ensconce, impuissance, acquisitive, emendation, tintinnabulation, abalone, dissemble, pellucid, traduce, objurgation, lummox, exegesis, probity, recondite, impugn, viscid, truculence, appurtenance, declivity, adumbrate, euphony, educe, titivate, cerulean, ardour, vulpine.

May 16, 2015 Posted by | Chess, Computer science, History, Language, Lectures, Mathematics | Leave a comment

Wikipedia articles of interest

i. Lock (water transport). Zumerchik and Danver’s book covered this kind of stuff as well, sort of, and I figured that since I’m not going to blog the book – for reasons provided in my goodreads review here – I might as well add a link or two here instead. The words ‘sort of’ above are in my opinion justified because the book coverage is so horrid you’d never even know what a lock is used for from reading that book; you’d need to look that up elsewhere.

On a related note there’s a lot of stuff in that book about the history of water transport etc. which you probably won’t get from these articles, but having a look here will give you some idea about which sort of topics many of the chapters of the book are dealing with. Also, stuff like this and this. The book coverage of the latter topic is incidentally much, much more detailed than is that wiki article, and the article – as well as many other articles about related topics (economic history, etc.) on the wiki, to the extent that they even exist – could clearly be improved greatly by adding content from books like this one. However I’m not going to be the guy doing that.

ii. Congruence (geometry).

iii. Geography and ecology of the Everglades (featured).

I’d note that this is a topic which seems to be reasonably well covered on wikipedia; there’s for example also a ‘good article’ on the Everglades and a featured article about the Everglades National Park. A few quotes and observations from the article:

“The geography and ecology of the Everglades involve the complex elements affecting the natural environment throughout the southern region of the U.S. state of Florida. Before drainage, the Everglades were an interwoven mesh of marshes and prairies covering 4,000 square miles (10,000 km2). […] Although sawgrass and sloughs are the enduring geographical icons of the Everglades, other ecosystems are just as vital, and the borders marking them are subtle or nonexistent. Pinelands and tropical hardwood hammocks are located throughout the sloughs; the trees, rooted in soil inches above the peat, marl, or water, support a variety of wildlife. The oldest and tallest trees are cypresses, whose roots are specially adapted to grow underwater for months at a time.”

“A vast marshland could only have been formed due to the underlying rock formations in southern Florida.[15] The floor of the Everglades formed between 25 million and 2 million years ago when the Florida peninsula was a shallow sea floor. The peninsula has been covered by sea water at least seven times since the earliest bedrock formation. […] At only 5,000 years of age, the Everglades is a young region in geological terms. Its ecosystems are in constant flux as a result of the interplay of three factors: the type and amount of water present, the geology of the region, and the frequency and severity of fires. […] Water is the dominant element in the Everglades, and it shapes the land, vegetation, and animal life of South Florida. The South Florida climate was once arid and semi-arid, interspersed with wet periods. Between 10,000 and 20,000 years ago, sea levels rose, submerging portions of the Florida peninsula and causing the water table to rise. Fresh water saturated the limestone, eroding some of it and creating springs and sinkholes. The abundance of fresh water allowed new vegetation to take root, and through evaporation formed thunderstorms. Limestone was dissolved by the slightly acidic rainwater. The limestone wore away, and groundwater came into contact with the surface, creating a massive wetland ecosystem. […] Only two seasons exist in the Everglades: wet (May to November) and dry (December to April). […] The Everglades are unique; no other wetland system in the world is nourished primarily from the atmosphere. […] Average annual rainfall in the Everglades is approximately 62 inches (160 cm), though fluctuations of precipitation are normal.”

“Between 1871 and 2003, 40 tropical cyclones struck the Everglades, usually every one to three years.”

“Islands of trees featuring dense temperate or tropical trees are called tropical hardwood hammocks.[38] They may rise between 1 and 3 feet (0.30 and 0.91 m) above water level in freshwater sloughs, sawgrass prairies, or pineland. These islands illustrate the difficulty of characterizing the climate of the Everglades as tropical or subtropical. Hammocks in the northern portion of the Everglades consist of more temperate plant species, but closer to Florida Bay the trees are tropical and smaller shrubs are more prevalent. […] Islands vary in size, but most range between 1 and 10 acres (0.40 and 4.05 ha); the water slowly flowing around them limits their size and gives them a teardrop appearance from above.[42] The height of the trees is limited by factors such as frost, lightning, and wind: the majority of trees in hammocks grow no higher than 55 feet (17 m). […] There are more than 50 varieties of tree snails in the Everglades; the color patterns and designs unique to single islands may be a result of the isolation of certain hammocks.[44] […] An estimated 11,000 species of seed-bearing plants and 400 species of land or water vertebrates live in the Everglades, but slight variations in water levels affect many organisms and reshape land formations.”

“Because much of the coast and inner estuaries are built by mangroves—and there is no border between the coastal marshes and the bay—the ecosystems in Florida Bay are considered part of the Everglades. […] Sea grasses stabilize sea beds and protect shorelines from erosion by absorbing energy from waves. […] Sea floor patterns of Florida Bay are formed by currents and winds. However, since 1932, sea levels have been rising at a rate of 1 foot (0.30 m) per 100 years.[81] Though mangroves serve to build and stabilize the coastline, seas may be rising more rapidly than the trees are able to build.[82]”

iv. Chang and Eng Bunker. Not a long article, but interesting:

Chang (Chinese: 昌; pinyin: Chāng; Thai: จัน, Jan, RTGS: Chan) and Eng (Chinese: 恩; pinyin: Ēn; Thai: อิน, In) Bunker (May 11, 1811 – January 17, 1874) were Thai-American conjoined twin brothers whose condition and birthplace became the basis for the term “Siamese twins”.[1][2][3]

I loved some of the implicit assumptions in this article: “Determined to live as normal a life as they could, Chang and Eng settled on their small plantation and bought slaves to do the work they could not do themselves. […] Chang and Adelaide [his wife] would become the parents of eleven children. Eng and Sarah [‘the other wife’] had ten.”

A ‘normal life’ indeed… The women the twins married were incidentally sisters who ended up disliking each other (I can’t imagine why…).

v. Genie (feral child). This is a very long article, and you should be warned that many parts of it may not be pleasant to read. From the article:

Genie (born 1957) is the pseudonym of a feral child who was the victim of extraordinarily severe abuse, neglect and social isolation. Her circumstances are prominently recorded in the annals of abnormal child psychology.[1][2] When Genie was a baby her father decided that she was severely mentally retarded, causing him to dislike her and withhold as much care and attention as possible. Around the time she reached the age of 20 months Genie’s father decided to keep her as socially isolated as possible, so from that point until she reached 13 years, 7 months, he kept her locked alone in a room. During this time he almost always strapped her to a child’s toilet or bound her in a crib with her arms and legs completely immobilized, forbade anyone from interacting with her, and left her severely malnourished.[3][4][5] The extent of Genie’s isolation prevented her from being exposed to any significant amount of speech, and as a result she did not acquire language during childhood. Her abuse came to the attention of Los Angeles child welfare authorities on November 4, 1970.[1][3][4]

In the first several years after Genie’s early life and circumstances came to light, psychologists, linguists and other scientists focused a great deal of attention on Genie’s case, seeing in her near-total isolation an opportunity to study many aspects of human development. […] In early January 1978 Genie’s mother suddenly decided to forbid all of the scientists except for one from having any contact with Genie, and all testing and scientific observations of her immediately ceased. Most of the scientists who studied and worked with Genie have not seen her since this time. The only post-1977 updates on Genie and her whereabouts are personal observations or secondary accounts of them, and all are spaced several years apart. […]

Genie’s father had an extremely low tolerance for noise, to the point of refusing to have a working television or radio in the house. Due to this, the only sounds Genie ever heard from her parents or brother on a regular basis were noises when they used the bathroom.[8][43] Although Genie’s mother claimed that Genie had been able to hear other people talking in the house, her father almost never allowed his wife or son to speak and viciously beat them if he heard them talking without permission. They were particularly forbidden to speak to or around Genie, so what conversations they had were therefore always very quiet and out of Genie’s earshot, preventing her from being exposed to any meaningful language besides her father’s occasional swearing.[3][13][43] […] Genie’s father fed Genie as little as possible and refused to give her solid food […]

In late October 1970, Genie’s mother and father had a violent argument in which she threatened to leave if she could not call her parents. He eventually relented, and later that day Genie’s mother was able to get herself and Genie away from her husband while he was out of the house […] She and Genie went to live with her parents in Monterey Park.[13][20][56] Around three weeks later, on November 4, after being told to seek disability benefits for the blind, Genie’s mother decided to do so in nearby Temple City, California and brought Genie along with her.[3][56]

On account of her near-blindness, instead of the disabilities benefits office Genie’s mother accidentally entered the general social services office next door.[3][56] The social worker who greeted them instantly sensed something was not right when she first saw Genie and was shocked to learn Genie’s true age was 13, having estimated from her appearance and demeanor that she was around 6 or 7 and possibly autistic. She notified her supervisor, and after questioning Genie’s mother and confirming Genie’s age they immediately contacted the police. […]

Upon admission to Children’s Hospital, Genie was extremely pale and grossly malnourished. She was severely undersized and underweight for her age, standing 4 ft 6 in (1.37 m) and weighing only 59 pounds (27 kg) […] Genie’s gross motor skills were extremely weak; she could not stand up straight nor fully straighten any of her limbs.[83][84] Her movements were very hesitant and unsteady, and her characteristic “bunny walk”, in which she held her hands in front of her like claws, suggested extreme difficulty with sensory processing and an inability to integrate visual and tactile information.[62] She had very little endurance, only able to engage in any physical activity for brief periods of time.[85] […]

Despite tests conducted shortly after her admission which determined Genie had normal vision in both eyes she could not focus them on anything more than 10 feet (3 m) away, which corresponded to the dimensions of the room she was kept in.[86] She was also completely incontinent, and gave no response whatsoever to extreme temperatures.[48][87] As Genie never ate solid food as a child she was completely unable to chew and had very severe dysphagia, completely unable to swallow any solid or even soft food and barely able to swallow liquids.[80][88] Because of this she would hold anything which she could not swallow in her mouth until her saliva broke it down, and if this took too long she would spit it out and mash it with her fingers.[50] She constantly salivated and spat, and continually sniffed and blew her nose on anything that happened to be nearby.[83][84]

Genie’s behavior was typically highly anti-social, and proved extremely difficult for others to control. She had no sense of personal property, frequently pointing to or simply taking something she wanted from someone else, and did not have any situational awareness whatsoever, acting on any of her impulses regardless of the setting. […] Doctors found it extremely difficult to test Genie’s mental age, but on two attempts they found Genie scored at the level of a 13-month-old. […] When upset Genie would wildly spit, blow her nose into her clothing, rub mucus all over her body, frequently urinate, and scratch and strike herself.[102][103] These tantrums were usually the only times Genie was at all demonstrative in her behavior. […] Genie clearly distinguished speaking from other environmental sounds, but she remained almost completely silent and was almost entirely unresponsive to speech. When she did vocalize, it was always extremely soft and devoid of tone. Hospital staff initially thought that the responsiveness she did show to them meant she understood what they were saying, but later determined that she was instead responding to nonverbal signals that accompanied their speaking. […] Linguists later determined that in January 1971, two months after her admission, Genie only showed understanding of a few names and about 15–20 words. Upon hearing any of these, she invariably responded to them as if they had been spoken in isolation. Hospital staff concluded that her active vocabulary at that time consisted of just two short phrases, “stop it” and “no more”.[27][88][99] Beyond negative commands, and possibly intonation indicating a question, she showed no understanding of any grammar whatsoever. […] Genie had a great deal of difficulty learning to count in sequential order. During Genie’s stay with the Riglers, the scientists spent a great deal of time attempting to teach her to count. She did not start to do so at all until late 1972, and when she did her efforts were extremely deliberate and laborious. By 1975 she could only count up to 7, which even then remained very difficult for her.”

“From January 1978 until 1993, Genie moved through a series of at least four additional foster homes and institutions. In some of these locations she was further physically abused and harassed to extreme degrees, and her development continued to regress. […] Genie is a ward of the state of California, and is living in an undisclosed location in the Los Angeles area.[3][20] In May 2008, ABC News reported that someone who spoke under condition of anonymity had hired a private investigator who located Genie in 2000. She was reportedly living a relatively simple lifestyle in a small private facility for mentally underdeveloped adults, and appeared to be happy. Although she only spoke a few words, she could still communicate fairly well in sign language.[3]”

April 20, 2015 Posted by | Biology, Books, Botany, Ecology, Geography, History, Mathematics, Psychology, Wikipedia, Zoology | Leave a comment

A few lectures

(This was a review lecture for me, as a few months back I read a textbook on these topics which went into quite a lot more detail – the post I link to has some relevant links if you’re curious to explore this topic further.)

A few relevant links: Group (featured), symmetry group, Cayley table, Abelian group, Symmetry groups of Platonic solids, dual polyhedron, Lagrange’s theorem (group theory), Fermat’s little theorem. I think he was perhaps trying to cover a bit too much ground in too little time by bringing up the RSA algorithm towards the end, but I’m sort of surprised by how many people disliked the video; I don’t think it’s that bad.
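
Since the lecture only gestures at how Fermat’s little theorem (via Euler’s generalization) makes RSA tick, here’s a minimal toy sketch in Python of the basic mechanics – my own illustration, not something from the lecture, with absurdly small primes and a bare integer ‘message’; real implementations of course use enormous primes and padding schemes:

```python
# Toy RSA: Euler's theorem (a generalization of Fermat's little
# theorem) guarantees that decryption undoes encryption.
from math import gcd

p, q = 61, 53             # two small primes (toy-sized only)
n = p * q                 # public modulus, here n = 3233
phi = (p - 1) * (q - 1)   # Euler's totient of n

e = 17                    # public exponent; must be coprime to phi
assert gcd(e, phi) == 1
d = pow(e, -1, phi)       # private exponent: inverse of e mod phi (Python 3.8+)

m = 42                    # the "message", an integer smaller than n
c = pow(m, e, n)          # encrypt: c = m^e mod n
assert pow(c, d, n) == m  # decrypt: c^d = m^(e*d) = m (mod n)
print(f"ciphertext {c} decrypts back to {pow(c, d, n)}")
```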

The beginning of the lecture has a lot of remarks about Fourier’s life which are in some sense not ‘directly related’ to the mathematics, so if the mathematics is what you’re most interested in you can probably skip the first 11 minutes or so of the lecture without missing out on much. The lecture is very non-technical compared to coverage like this, this, and this (…or this).
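
To give a flavour of the sort of material covered, here’s a small Python sketch – again my own, not taken from the lecture – of the partial Fourier sums of a square wave; the persistent overshoot near the jumps is the Gibbs phenomenon:

```python
# Partial Fourier sums of the square wave sign(sin(x)), whose Fourier
# series contains only the odd sine harmonics: (4/pi) * sin(k*x)/k.
import numpy as np

def square_wave_partial_sum(x, n_terms):
    """Sum of the first n_terms odd harmonics of the square wave."""
    total = np.zeros_like(x)
    for k in range(1, 2 * n_terms, 2):  # k = 1, 3, 5, ...
        total += (4 / np.pi) * np.sin(k * x) / k
    return total

x = np.linspace(0, 2 * np.pi, 2001)
for n in (1, 5, 50):
    peak = square_wave_partial_sum(x, n).max()
    # the peak tends to ~1.179 rather than 1.0: the Gibbs overshoot
    print(f"{n:3d} terms: peak value {peak:.3f}")
```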

I think one thing worth mentioning here is that the lecturer is the author of a rather amazing book on the topic he talks about in the lecture.

April 2, 2015 Posted by | History, Lectures, Mathematics | Leave a comment

Wikipedia articles of interest

i. Invasion of Poland. I recently realized I had no idea e.g. how long it took for the Germans and Soviets to defeat Poland during WW2 (the answer: one month and five days). The Germans attacked more than two weeks before the Soviets did. The article has lots of links, like most wikipedia articles on such topics. Incidentally the question of why France and Britain applied a double standard and only declared war on Germany, and not the Soviet Union, is discussed in much detail in the links provided by u/OldWorldGlory here.

ii. Huaynaputina. From the article:

“A few days before the eruption, someone reported booming noise from the volcano and fog-like gas being emitted from its crater. The locals scrambled to appease the volcano, preparing girls, pets, and flowers for sacrifice.”

This makes sense – what else would one do in a situation like that? Finding a few virgins, dogs and flowers seems like the sensible approach – yes, you have to love humans and how they always react in sensible ways to such crises.

I’m not sure the rest of the article is really all that interesting, but I found the above sentence both amusing and depressing enough to link to it here.

iii. Albert Pierrepoint. This guy killed hundreds of people.

On the other hand people were fine with it – it was his job. Well, sort of – this is actually slightly complicated. (“Pierrepoint was often dubbed the Official Executioner, despite there being no such job or title”).

Anyway this article is clearly the story of a guy who achieved his childhood dream – though unlike other children, he did not dream of becoming a fireman or a pilot, but rather of becoming the Official Executioner of the country. I’m currently thinking of using Pierrepoint as the main character in the motivational story I plan to tell my nephew when he’s a bit older.

iv. Second Crusade (featured). Considering how many different ‘states’ and ‘kingdoms’ were involved, a surprisingly small number of people were actually fighting; the article notes that “[t]here were perhaps 50,000 troops in total” on the Christian side when the attack on Damascus was initiated. It wasn’t enough, as the outcome of the crusade was a decisive Muslim victory in the ‘Holy Land’ (Middle East).

v. 0.999… (featured). This thing is equal to one, but it can sometimes be really hard to get even very smart people to accept this fact. Lots of details and some proofs presented in the article.
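
For what it’s worth, two of the simpler arguments the article presents are easy to reproduce. The algebraic one: let $x = 0.999\ldots$; then $10x - x = 9.999\ldots - 0.999\ldots = 9$, so $9x = 9$ and hence $x = 1$. The other treats the decimal as a geometric series:

$$0.999\ldots \;=\; \sum_{k=1}^{\infty}\frac{9}{10^{k}} \;=\; 9\cdot\frac{1/10}{1-1/10} \;=\; 1.$$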

vi. Shapley–Folkman lemma (‘good article’ – but also a somewhat technical article).

vii. Multituberculata. This article is not that special, but I add it here partly because I think it ought to be, and I’m actually sort of angry that it’s not; sometimes the coverage provided on wikipedia simply strikes me as grossly unfair, even if that’s perhaps a slightly odd way to think about it. As pointed out in the article (Agustí points this out in his book as well), “The multituberculates existed for about 120 million years, and are often considered the most successful, diversified, and long-lasting mammals in natural history.” Yet notice how much (/little) coverage the article provides. Now compare the article with this article, or this.

February 25, 2015 Posted by | Biology, Economics, Evolutionary biology, History, Mathematics, Paleontology, Wikipedia, Zoology | 2 Comments