Econstudentlog

Beyond Significance Testing (V)

I never really finished my intended coverage of this book. Below I have added some observations from the last couple of chapters.

“Estimation of the magnitudes and precisions of interaction effects should be the focus of the analysis in factorial designs. Methods to calculate standardized mean differences for contrasts in such designs are not as well developed as those for one-way designs. Standardizers for single-factor contrasts should reflect variability as a result of intrinsic off-factors that vary naturally in the population, but variability due to extrinsic off-factors that do not vary naturally should be excluded. Measures of association may be preferred in designs with three or more factors or where some factors are random. […] There are multivariate versions of d statistics and measures of association for designs with two or more continuous outcomes. For example, a Mahalanobis distance is a multivariate d statistic, and it estimates the difference between two group centroids (the sets of all univariate means) in standard deviation units controlling for intercorrelation.”
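The Mahalanobis distance idea is easy to make concrete. Below is a small sketch of my own (not from the book) that computes the distance between two group centroids using the pooled within-group covariance matrix; with a single outcome it reduces to an ordinary pooled-SD standardized mean difference d.

```python
import numpy as np

# Minimal sketch (not from the book): Mahalanobis distance between two
# group centroids, using the pooled within-group covariance matrix.
# With one outcome this collapses to the usual standardized mean difference.

def mahalanobis_d(group1, group2):
    """group1, group2: arrays of shape (n_cases, n_outcomes)."""
    n1, n2 = len(group1), len(group2)
    centroid_diff = group1.mean(axis=0) - group2.mean(axis=0)
    # Pooled within-group covariance matrix
    s_pooled = ((n1 - 1) * np.cov(group1, rowvar=False) +
                (n2 - 1) * np.cov(group2, rowvar=False)) / (n1 + n2 - 2)
    return float(np.sqrt(centroid_diff @ np.linalg.inv(s_pooled) @ centroid_diff))

rng = np.random.default_rng(0)
g1 = rng.normal([0.0, 0.0], 1.0, size=(50, 2))   # made-up two-outcome data
g2 = rng.normal([0.5, 0.3], 1.0, size=(50, 2))
print(mahalanobis_d(g1, g2))
```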

“Replication is a foundational scientific activity but one neglected in the behavioral sciences. […] There is no single nomenclature to classify replication studies (e.g., Easley et al., 2000), but there is enough consensus to outline at least the broad types […] Internal replication includes statistical resampling and cross-validation by the original researcher(s). Resampling includes bootstrapping and related computer-based methods, such as the jackknife technique, that randomly combine the cases in an original data set in different ways to estimate the effect of idiosyncrasies in the sample on the results […] Such procedures are not replication in the usual scientific sense. The total sample in cross-validation is randomly divided into a derivation sample and a cross-validation sample, and the same analyses are conducted in each one. External replication is conducted by people other than the original researchers, and it involves new samples collected at different times or places.
There are two broad contexts for external replication. The first concerns different kinds of replications of experimental studies. One is exact replication, also known as direct replication, literal replication, or precise replication, where all major aspects of an original study — its sampling methods, design, and outcome measures — are closely copied. True exact replications exist more in theory than in practice because it is difficult to perfectly duplicate a study […] Another type is operational replication — also referred to as partial replication or improvisational replication — where just the sampling and methods of an original study are duplicated. […] The outcome of operational replication is potentially more informative than that of literal replication, because robust effects should stand out against variations in procedures, settings, or samples.
In balanced replication, operational replications are used as control conditions. Other conditions may represent the manipulation of additional substantive variables to test new hypotheses. […] The logic of balanced replication is similar to that of strong inference, which features designing studies to rule out competing explanations, and to that of dismantling research. The aim of the latter is to study elements of treatments with multiple components in smaller combinations to find the ones responsible for treatment efficacy.
A researcher who conducts a construct replication or conceptual replication avoids close imitation of the specific methods of an original study. An ideal construct replication would be carried out by telling a skilled researcher little more than the original empirical result. This researcher would then specify the design, measures, and data analysis methods deemed appropriate to test whether a finding has generality beyond the particular situation studied in an original work.”
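As a small aside, the internal-replication procedures mentioned at the start of the quote (bootstrapping, cross-validation) are easy to illustrate. The sketch below is my own toy example, not something from the book: it bootstraps a mean difference and then does a simple derivation/validation split of the same made-up data.

```python
import numpy as np

# Illustrative sketch of the internal-replication ideas mentioned above
# (mine, not the book's): bootstrap resampling of a mean difference, plus a
# simple split-half cross-validation of the same estimate.

rng = np.random.default_rng(1)
treatment = rng.normal(0.4, 1.0, 60)   # hypothetical data
control = rng.normal(0.0, 1.0, 60)

# Bootstrap: resample cases with replacement to see how much the estimate
# moves around under idiosyncrasies of this particular sample.
boot = [rng.choice(treatment, 60).mean() - rng.choice(control, 60).mean()
        for _ in range(5000)]
print("observed diff:", treatment.mean() - control.mean())
print("bootstrap 95% interval:", np.percentile(boot, [2.5, 97.5]))

# Cross-validation: split into a derivation half and a validation half and
# check whether the derivation-half estimate holds up in the other half.
idx = rng.permutation(60)
derive, validate = idx[:30], idx[30:]
print("derivation diff:", treatment[derive].mean() - control[derive].mean())
print("validation diff:", treatment[validate].mean() - control[validate].mean())
```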

“There is evidence that only small proportions — in some cases < 1% — of all published studies in the behavioral sciences are specifically described as replications (e.g., Easley et al., 2000; Kmetz, 2002). […] K. Hunt (1975), S. Schmidt (2009), and others have argued that most replication in the behavioral sciences occurs covertly in the form of follow-up studies, which combine direct replication (or at least construct replication) with new procedures, measures, or hypotheses in the same investigation. Such studies may be described by their authors as “extensions” of previous works with new elements but not as “replications,” […] the problem with this informal approach to replication is that it is not explicit and therefore is unsystematic. […] Perhaps replication would be more highly valued if confidence intervals were reported more often. Then readers of empirical articles would be able to see the low precision with which many studies are conducted. […] Wide confidence intervals indicate that a study contains only limited information, a fact that is concealed when only results of statistical tests are reported”.
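The point about wide confidence intervals and limited information is worth making concrete. Here is a quick calculation of my own (not from the book): the 95% confidence interval for a mean difference only narrows with the square root of the per-group sample size, so small studies are unavoidably imprecise.

```python
import numpy as np
from scipy import stats

# Quick illustration (my own, not from the book) of the precision point:
# the half-width of a 95% CI for a mean difference shrinks only with the
# square root of n, so small studies carry wide intervals.
sd = 1.0  # assumed common SD of the outcome
for n in (10, 25, 50, 100, 400):       # per-group sample sizes
    se = sd * np.sqrt(2 / n)           # SE of the difference of two means
    half_width = stats.t.ppf(0.975, df=2 * n - 2) * se
    print(f"n per group = {n:4d}   95% CI half-width ~ {half_width:.2f} SDs")
```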

“Because sets of related investigations in the behavioral sciences are generally made up of follow-up studies, the explanation of observed variability in their results is a common goal in meta-analysis. That is, the meta-analyst tries to identify and measure characteristics of follow-up studies that give rise to variability among the results. These characteristics include attributes of samples (e.g., mean age, gender), settings in which cases are tested (e.g., inpatient vs. outpatient), and the type of treatment administered (e.g., duration, dosage). Other factors concern properties of the outcome measures (e.g., self-report vs. observational), quality of the research design, source of funding (e.g., private vs. public), professional backgrounds of the authors, or date of publication. The last reflects the potential impact of temporal factors such as changing societal attitudes. […] Study factors are conceptualized as meta-analytic predictors, and study outcome measured with the same standardized effect size is typically the criterion. Each predictor is actually a moderator variable, which implies interaction. This is because the criterion, study effect size, usually represents the association between the independent and dependent variables. If observed variation in effect sizes across a set of studies is explained by a meta-analytic predictor, the relation between the independent and dependent variables changes across the levels of that predictor. For the same reason, the terms moderator variable analysis and meta-regression describe the process of estimating whether study characteristics explain variability in results. […] study factors can covary, such as when different variations of a treatment tend to be administered to patients with acute versus chronic forms of a disorder. If meta-analytic predictors covary, it is necessary to control for overlapping explained proportions of variability in effect sizes.
It is also possible for meta-analytic predictors to interact, which means that they have a joint influence on observed effect sizes. Interaction also implies that to understand variability in results, one must consider the predictors together. This is a subtle point, one that requires some elaboration: Each individual predictor in meta-analysis is a moderator variable. But the relation of one meta-analytic predictor to study outcome may depend on another predictor. For example, the effect of treatment type on observed effect sizes may depend on whether cases with mild versus severe forms of an illness were studied. A different kind of phenomenon is mediation, or indirect effects among study factors. Suppose that one factor is degree of early exposure to a toxic agent and another is illness chronicity. The exposure factor may affect study outcome both directly and indirectly through its influence on chronicity. Indirect effects can be estimated in meta-analysis by applying techniques from structural equation modeling to covariance matrices of study factors and effect sizes pooled over related studies. The use of both techniques together is called mediational meta-analysis or model-driven meta-analysis. […] It is just as important in meta-analysis as when conducting a primary study to clearly specify the hypotheses and operational definitions of constructs.”
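A moderator analysis of the kind described above can be sketched in a few lines. The toy example below is mine, not the book's: a fixed-effect meta-regression that weights each study's effect size by the inverse of its sampling variance and regresses it on a hypothetical moderator ("dose").

```python
import numpy as np

# Toy fixed-effect meta-regression (my sketch, not the book's method):
# regress study effect sizes on a moderator, weighting each study by the
# inverse of its sampling variance, as in standard meta-regression.

d = np.array([0.10, 0.25, 0.40, 0.55, 0.35])        # hypothetical study effect sizes
v = np.array([0.04, 0.02, 0.05, 0.03, 0.02])        # their sampling variances
dose = np.array([1.0, 2.0, 3.0, 4.0, 2.5])          # hypothetical moderator

w = 1.0 / v
X = np.column_stack([np.ones_like(dose), dose])
# Weighted least squares: beta = (X'WX)^{-1} X'Wy
beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * d))
print("intercept, slope per unit of moderator:", beta)
```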

“There are ways to estimate in meta-analysis what is known as the fail-safe N, which is the number of additional studies where the average effect size is zero that would be needed to increase the p value in a meta-analysis for the test of the mean observed effect size to > .05 (i.e., the nil hypothesis is not rejected). These additional studies are assumed to be file drawer studies or to be otherwise not found in the literature search of a meta-analysis. If the estimated number of such studies is so large that it is unlikely that so many studies (e.g., 2,000) with a mean nil effect size could exist, more confidence in the results may be warranted. […] Studies from each source are subject to different types of biases. For example, bias for statistical significance implies that published studies have more H0 rejections and larger effect sizes than do unpublished studies […] There are techniques in meta-analysis for estimating the extent of publication bias […]. If such bias is indicated, a meta-analysis based mainly on published sources may be inappropriate.”
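One common way to compute a fail-safe N is Rosenthal's file-drawer formula; the sketch below is my own illustration of that variant (the book may describe others). Given the one-tailed z values of the k studies found, it returns the number of additional mean-zero studies that would be needed to push the combined p above .05.

```python
import numpy as np

# Rosenthal-style fail-safe N (one common variant; shown as an illustration,
# not necessarily the exact formula the book uses). Given the one-tailed
# z-values of k found studies, how many unpublished null-result studies
# would be needed to drag the combined (Stouffer) p above .05?

def fail_safe_n(z_values, z_crit=1.645):
    z = np.asarray(z_values, dtype=float)
    k = len(z)
    return (z.sum() ** 2) / (z_crit ** 2) - k

print(fail_safe_n([2.1, 1.8, 2.5, 1.2, 2.9]))   # hypothetical study z-values
```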

“For two reasons, it is crucial to assess the […] research quality for each found primary study. The first is to eliminate from further consideration studies so flawed that their results are untrustworthy. […] The other reason concerns the remaining (nonexcluded) studies, which may be divided into those that are well designed versus those with significant limitations. Results synthesized from the former group may be given greater weight in the analysis than those from the latter group. […] Relatively high proportions of found studies in meta-analyses are often discarded due to poor rated quality, a sad comment on the status of a research literature. […] It is probably best to see meta-analysis as a way to better understand the status of a research area than as an end in itself or some magical substitute for critical thought. Its emphasis on effect sizes and the explicit description of study retrieval methods and assumptions is an improvement over narrative literature reviews. It also has the potential to address hypotheses not directly tested in primary studies. […] But meta-analysis does not solve the replication crisis in the behavioral sciences.”

“Conventional meta-analysis and Bayesian analysis are both methods for research synthesis, and it is worthwhile to briefly summarize their relative strengths. Both methods accumulate evidence about a parameter of interest and generate confidence intervals for that parameter. Both methods also allow sensitivity analysis of the consequences of making different kinds of decisions that may affect the results. Because meta-analysis is based on traditional statistical methods, it tests basically the same kinds of hypotheses that are evaluated in primary studies with traditional statistical tests. This limits the kinds of questions that can be addressed in meta-analysis. For example, a standard meta-analysis cannot answer the question, What is the probability that treatment has an effect? It could be determined whether zero is included in the confidence interval based on the average effect size across a set of studies, but this would not address the question just posed. In contrast, there is no special problem in dealing with this kind of question in Bayesian statistics. A Bayesian approach takes into account both previous knowledge and the inherent plausibility of the hypothesis, but meta-analysis is concerned only with the former. It is possible to combine meta-analytical and Bayesian methods in the same analysis (see Howard et al., 2000).”
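To make the contrast concrete, here is a minimal sketch of my own (not from the book) of the kind of question a Bayesian analysis answers directly: with a normal prior on the treatment effect and a normal likelihood for the observed estimate, the posterior probability that the effect is positive is a one-line calculation. The prior and the "data" below are of course made up.

```python
import numpy as np
from scipy import stats

# Minimal sketch (mine, not from the book): a conjugate normal-normal update.
# The posterior is normal, and P(effect > 0 | data) follows directly.

prior_mean, prior_sd = 0.0, 0.5          # assumed sceptical prior
obs_effect, obs_se = 0.30, 0.12          # hypothetical estimate from the data

post_prec = 1 / prior_sd**2 + 1 / obs_se**2
post_mean = (prior_mean / prior_sd**2 + obs_effect / obs_se**2) / post_prec
post_sd = np.sqrt(1 / post_prec)
print("P(effect > 0 | data) =", 1 - stats.norm.cdf(0, post_mean, post_sd))
```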

“Bayesian methods are no more magical than any other set of statistical techniques. One drawback is that there is no direct way in Bayesian estimation to control type I or type II errors regarding the dichotomous decision to reject or retain some hypothesis. Researchers can do so in traditional significance testing, but too often they ignore power (the complement of the probability of a type II error) or specify an arbitrary level of type I error (e.g., α = .05), so this capability is usually wasted. Specification of prior probabilities or prior distributions in Bayesian statistics affects estimates of their posterior counterparts. If these specifications are grossly wrong, the results could be meaningless […] assumptions in Bayesian analyses should be explicitly stated and thus open to scrutiny. Bowers and Davis (2012) criticized the application of Bayesian methods in neuroscience. They noted in particular that Bayesian methods offer little improvement over more standard statistical techniques, but they also noted problems with use of the former, such as the specification of prior probabilities or utility functions in ways that are basically arbitrary. As with more standard statistical methods, Bayesian techniques are not immune to misuse. […] The main point of this chapter — and that of the whole book — is [however] that there are alternatives to the unthinking overreliance on significance testing that has handicapped the behavioral sciences for so long.”

 


October 19, 2017 Posted by | Books, Statistics

Words

Most of the words below are ones I encountered while reading Flashman and the Angel of the Lord and Flashman on the March.

Guerdon. Frowst. Dunnage. Veldt. Whelk. Tup. Gannet. Hawser. Doss-house. Brogue. Tucker. Voluptuary. Morion. Flawn. Ague. Fusee/Fuzee. Jimp. Anent. Skein. Fob.

Arbitrament. Whiffler. Abide. Beldam. Schiltron. Pickaninny/piccaninny. Gird/girt. Despond. Whittling. Glim. Peignoir. Gamp. Connubial. Ensconce. Confab. Trestle. Squawl. Paterfamilias. Dabble. Peal.

Buff. Duenna. Yawl. Palaver. Lateen. Felucca. Coracle. Gimlet. Tippet. Toggery. Dry-gulch. Nuncheon. Lovelock. Josser. Casque. Withy. Weir. Sonsy. Guzzle. Hearty.

Rattle. Pippin. Trencherman. Potation. Bilbo. Burly. Haulier. Roundelay. Lych-gate. Skilligalee/skilly. Labial. Dudgeon. Caravanserai. Mithridatism. Avast. Lagniappe. Thigmotaxis. Afforesting. Immiseration. Chamberlain.

October 11, 2017 Posted by | Books, Language

Diabetes and the Brain (V)

I have blogged this book in some detail in the past, but I never really finished my intended coverage of it. This post is an attempt to rectify that.

Below I have added some quotes and observations from some of the chapters I have not covered in my previous posts about the book. I bolded some key observations along the way.

“A substantial number of studies have assessed the effect of type 2 diabetes on cognitive functioning with psychometric tests. The majority of these studies reported subtle decrements in individuals with type 2 diabetes relative to non-diabetic controls (2, 4). […] the majority of studies in patients with type 2 diabetes reported moderate reductions in neuropsychological test performance, mainly in memory, information-processing speed, and mental flexibility, a pattern that is also observed in aging-related cognitive decline. […] the observed cognitive decrements are relatively subtle and rather non-specific. […] All in all, disturbances in glucose and insulin metabolism and associated vascular risk factors are associated with modest reductions in cognitive performance in “pre-diabetic stages.” Consequently, it may well be that the cognitive decrements that can be observed in patients with type 2 diabetes also start to develop before the actual onset of the diabetes. […] Because the different vascular and metabolic risk factors that are clustered in the metabolic syndrome are strongly interrelated, the contribution of each of the individual factors will be difficult to assess.”

“Aging-related changes on brain imaging include vascular lesions and focal and global atrophy. Vascular lesions include (silent) brain infarcts and white-matter hyperintensities (WMHs). WMHs are common in the general population and their prevalence increases with age, approaching 100% by the age of 85 (69). The prevalence of lacunar infarcts also increases with age, up to 5% for symptomatic infarcts and 30% for silent infarcts by the age of 80 (70). In normal aging, the brain gradually reduces in size, which becomes particularly evident after the age of 70 (71). This loss of brain volume is global […] age-related changes of the brain […] are often relatively more pronounced in older patients with type 2 […] A recent systematic review showed that patients with diabetes have a 2-fold increased risk of (silent) infarcts compared to non-diabetic persons (75). The relationship between type 2 diabetes and WMHs is subject to debate. […] there are now clear indications that diabetes is a risk factor for WMH progression (82). […] The presence of the APOE ε4 allele is a risk factor for the development of Alzheimer’s disease (99). Patients with type 2 diabetes who carry the APOE ε4 allele appeared to have a 2-fold increased risk of dementia compared to persons with either of these risk factors in isolation (100, 101).”

“In adults with type 1 diabetes the occurrence of microvascular complications is associated with reduced cognitive performance (137) and accelerated cognitive decline (138). Moreover, type 1 diabetes is associated with decreased white-matter volume of the brain and diminished cognitive performance in particular in patients with retinopathy (139). Microvascular complications are also thought to play a role in the development of cognitive decline in patients with type 2 diabetes, but studies that have specifically examined this association are scarce. […] Currently there are no established specific treatment measures to prevent or ameliorate cognitive impairments in patients with diabetes.”

“Clinicians should be aware of the fact that cognitive decrements are relatively more common among patients with diabetes. […] it is important to note that cognitive complaints as spontaneously expressed by the patient are often a poor indicator of the severity of cognitive decrements. People with moderate disturbances may express marked complaints, while people with marked disturbances of cognition often do not complain at all. […] Diabetes is generally associated with relatively mild impairments, mainly in attention, memory, information-processing speed, and executive function. Rapid cognitive decline or severe cognitive impairment, especially in persons under the age of 60 is indicative of other underlying pathology. Potentially treatable causes of cognitive decline such as depression should be excluded. People who are depressed often present with complaints of concentration or memory.”

“Insulin resistance increases with age, and the organism maintains normal glucose levels as long as it can produce enough insulin (hyperinsulinemia). Some individuals are less capable than others to mount sustained hyperinsulinemia and will develop glucose intolerance and T2D (23). Other individuals with insulin resistance will maintain normal glucose levels at the expense of hyperinsulinemia but their pancreas will eventually “burn out,” will not be able to sustain hyperinsulinemia, and will develop glucose intolerance and diabetes (23). Others will continue having insulin resistance, may have or not have glucose intolerance, will not develop diabetes, but will have hyperinsulinemia and suffer its consequences. […] Elevations of adiposity result in insulin resistance, causing the pancreas to increase insulin to abnormal levels to sustain normal glucose, and if and when the pancreas can no longer sustain hyperinsulinemia, glucose intolerance and diabetes will ensue. However, the overlap between these processes is not complete (26). Not all persons with higher adiposity will develop insulin resistance and hyperinsulinemia, but most will. Not all persons with insulin resistance and hyperinsulinemia will develop glucose intolerance and diabetes, and this depends on genetic and other susceptibility factors that are not completely understood (25, 26). Some adults develop diabetes without going through insulin resistance and hyperinsulinemia, but it is thought that most will. The susceptibility to adiposity, that is, the risk of developing the above-described sequence in response to adiposity, varies by gender (4) and particularly by ethnicity. […] Chinese and Southeast Asians are more susceptible than Europeans to developing insulin resistance with comparable increases of adiposity (2).”

“There is very strong evidence that adiposity, hyperinsulinemia, and T2D are related to cognitive impairment syndromes, whether AD [Alzheimer’s Disease], VD [Vascular Dementia], or MCI [Mild Cognitive Impairment], and whether the main mechanism is cerebrovascular disease or non-vascular mechanisms. However, more evidence is needed to establish causation. If the relation between these conditions and dementia were to be causal, the public health implications are enormous. […] Diabetes mellitus affects about 20% of adults older than 65 years of age […] two-thirds of the adult population in the United States are overweight or obese, and the short-term trend is for this to worsen. These trends are also being observed worldwide. […] We estimated that in New York City the presence of diabetes or hyperinsulinemia in elderly people could account for 39% of cases of AD (78).”

“Psychiatric illnesses in general may be more common among persons with diabetes than in community-based samples, specifically affective and anxiety-related disorders (4). Persons with diabetes are twice as likely to have depression as non-diabetic persons (5). A review of 20 studies on the comorbidity of depression and diabetes found that the average prevalence was about 15%, and ranged from 8.5 to 40%, three times the rate of depressive disorders found in the general adult population of the United States (4–7). The rates of clinically significant depressive symptoms among persons with diabetes are even higher – ranging from 21.8 to 60.0% (8). Recent studies have indicated that persons with type II diabetes, accompanied by either major or minor depression, have significantly higher mortality rates than non-depressed persons with diabetes (9–10) […] A recent meta-analysis reported that patients with type 2 diabetes have a 2-fold increased risk of depression compared to non-diabetic persons (142). The prevalence of major depressive disorder in patients with type 2 diabetes was estimated at 11% and depressive symptoms were observed in 31% of the patients.” (As should be obvious from the above quotes, the estimates vary a lot here, but they tend to be high – US.)

Depression is an important risk factor for cardiovascular disease (Glassman, Maj & Sartorius is a decent book on these topics), and diabetes is also an established risk factor. Might this not lead to a hypothesis that diabetics who are depressed may do particularly poorly, with higher mortality rates and so on? Yes. …and it seems that this is also what people tend to find when they look at this stuff:

“Persons with diabetes and depressive symptoms have mortality rates nearly twice as high as persons with diabetes and no depressive symptomatology (9). Persons with co-occurring medical illness and depression also have higher health care utilization leading to higher direct and indirect health care costs (12–13) […]. A meta-analysis of the relationship between depression and diabetes (types I and II) indicated that an increase in the number of depressive symptoms is associated with an increase in the severity and number of diabetic complications, including retinopathy, neuropathy, and nephropathy (15–17). Compared to persons with either diabetes or depression alone, individuals with co-occurring diabetes and depression have shown poorer adherence to dietary and physical activity recommendations, decreased adherence to hypoglycemic medication regimens, higher health care costs, increases in HgbA1c levels, poorer glycemic control, higher rates of retinopathy, and macrovascular complications such as stroke and myocardial infarction, higher ambulatory care use, and use of prescriptions (14, 18–22). Diabetes and depressive symptoms have been shown to have strong independent effects on physical functioning, and individuals experiencing either of these conditions will have worse functional outcomes than those with neither or only one condition (19–20). Nearly all of diabetes management is conducted by the patient and those with co-occurring depression may have poorer outcomes and increased risk of complications due to less adherence to glucose, diet, and medication regimens […] There is some evidence that treatment of depression with antidepressant and/or cognitive-behavioral therapies can improve glycemic control and glucose regulation without any change in the treatment for diabetes (27, 28) […] One important finding is [also] that treatment of depression seems to be able to halt atrophy of the hippocampus and may even lead to stimulation of neurogenesis of hippocampal cells (86).”

“Diabetic neuropathy is a severe, disabling chronic condition that affects a significant number of individuals with diabetes. Long considered a disease of the peripheral nervous system, there is mounting evidence of central nervous system involvement. Recent advances in neuroimaging methods have led to a better understanding and refinement of how diabetic neuropathy affects the central nervous system. […] spinal cord atrophy is an early process being present not only in established-DPN [diabetic peripheral neuropathy] but also even in subjects with relatively modest impairments of nerve function (subclinical-DPN) […] findings […] show that the neuropathic process in diabetes is not confined to the peripheral nerve and does involve the spinal cord. Worryingly, this occurs early in the neuropathic process. Even at the early DPN stage, extensive and perhaps even irreversible damage may have occurred. […] it is likely that the insult of diabetes is generalised, concomitantly affecting the PNS and CNS. […] It is noteworthy that a variety of therapeutic interventions specifically targeted at peripheral nerve damage in DPN have thus far been ineffective, and it is possible that this may in part be due to inadequate appreciation of the full extent of CNS involvement in DPN.”

Interestingly, if the CNS is also involved in the pathogenesis of (‘human’) diabetic neuropathy, it may have some relevance to the complaint that some methods of diabetes-induction in animal models cause (secondary) damage to central structures – a complaint which I’ve previously made a note of e.g. in the context of my coverage of Horowitz & Samson’s book. The relevance of this depends quite a bit on whether the same central structures are affected in the animal models and in humans. They probably aren’t. These guys also discuss this stuff in some detail, though I won’t cover that part here. Some observations on related topics are however worth including:

“Several studies examining behavioral learning have shown progressive deficits in diabetic rodents, whereas simple avoidance tasks are preserved. Impaired spatial learning and memory as assessed by the Morris water maze paradigm occur progressively in both the spontaneously diabetic BB/Wor rat and STZ-induced diabetic rodents (1, 11, 12, 22, 41, 42). The cognitive components reflected by impaired Morris water maze performances involve problem-solving, enhanced attention and storage, and retrieval of information (43). […] Observations regarding cognition and plasticity in models characterized by hyperglycemia and insulin deficiency (i.e., alloxan or STZ-diabetes, BB/Wor rats, NOD-mice), often referred to as models of type 1 diabetes, are quite consistent. With respect to clinical relevance, it should be noted that the level of glycemia in these models markedly exceeds that observed in patients. Moreover, changes in cognition as observed in these models are much more rapid and severe than in adult patients with type 1 diabetes […], even if the relatively shorter lifespan of rodents is taken into account. […] In my view these models of “type 1 diabetes” may help to understand the pathophysiology of the effects of severe chronic hyperglycemia–hypoinsulinemia on the brain, but mimic the impact of type 1 diabetes on the brain in humans only to a limited extent.”

“Abnormalities in cognition and plasticity have also been noted in the majority of models characterized by insulin resistance, hyperinsulinemia, and (modest) hyperglycemia (e.g., Zucker fa/fa rat, Diabetic Zucker rat, db/db mouse, GK rat, OLETF rat), often referred to as models of type 2 diabetes. With regard to clinical relevance, it is important to note that although the endocrinological features of these models do mimic certain aspects of type 2 diabetes, the genetic defect that underlies each of them is not the primary defect encountered in humans with type 2 diabetes. Some of the genetic abnormalities that lead to a “diabetic phenotype” may also have a direct impact on the brain. […] some studies using these models report abnormalities in cognition and plasticity, even in the absence of hyperglycemia […] In addition, in the majority of available models insulin resistance and associated metabolic abnormalities develop at a relatively early age. Although this is practical for research purposes it needs to be acknowledged that type 2 diabetes is typically a disease of older age in humans. […] It is therefore still too early to determine the clinical significance of the available models in understanding the impact of type 2 diabetes on the brain. Further efforts into the development of a valid model are warranted.”

“[A] key problem in clinical studies is the complexity and multifactorial nature of cerebral complications in relation to diabetes. Metabolic factors in patients (e.g., glucose levels, insulin levels, insulin sensitivity) are strongly interrelated and related to other factors that may affect the brain (e.g., blood pressure, lipids, inflammation, oxidative stress). Derangements in these factors in the periphery and the brain may be dissociated, for example, through the role of the blood–brain barrier, or adaptations of transport across this barrier, or through differences in receptor functions and post-receptor signaling cascades in the periphery and the brain. The different forms of treatments that patients receive add to the complexity. A key contribution of animal studies may be to single out individual components and study them in isolation or in combination with a limited number of other factors in a controlled fashion.”

October 9, 2017 Posted by | Books, Cardiology, Diabetes, Epidemiology, Medicine, Neurology, Pharmacology

Physical chemistry

This is a good book; I really liked it, just as I really liked the other book in the series that I read by the same author, the one about the laws of thermodynamics (blog coverage here). I know much, much more about physics than I do about chemistry, and even though some of it was review I learned a lot from this one. Recommended, certainly if you find the quotes below interesting. As usual, I’ve added some observations from the book and some links to topics/people/etc. covered/mentioned in the book below.

Some quotes:

“Physical chemists pay a great deal of attention to the electrons that surround the nucleus of an atom: it is here that the chemical action takes place and the element expresses its chemical personality. […] Quantum mechanics plays a central role in accounting for the arrangement of electrons around the nucleus. The early ‘Bohr model’ of the atom, […] with electrons in orbits encircling the nucleus like miniature planets and widely used in popular depictions of atoms, is wrong in just about every respect—but it is hard to dislodge from the popular imagination. The quantum mechanical description of atoms acknowledges that an electron cannot be ascribed to a particular path around the nucleus, that the planetary ‘orbits’ of Bohr’s theory simply don’t exist, and that some electrons do not circulate around the nucleus at all. […] Physical chemists base their understanding of the electronic structures of atoms on Schrödinger’s model of the hydrogen atom, which was formulated in 1926. […] An atom is often said to be mostly empty space. That is a remnant of Bohr’s model in which a point-like electron circulates around the nucleus; in the Schrödinger model, there is no empty space, just a varying probability of finding the electron at a particular location.”

“No more than two electrons may occupy any one orbital, and if two do occupy that orbital, they must spin in opposite directions. […] this form of the principle [the Pauli exclusion principleUS] […] is adequate for many applications in physical chemistry. At its very simplest, the principle rules out all the electrons of an atom (other than atoms of one-electron hydrogen and two-electron helium) having all their electrons in the 1s-orbital. Lithium, for instance, has three electrons: two occupy the 1s orbital, but the third cannot join them, and must occupy the next higher-energy orbital, the 2s-orbital. With that point in mind, something rather wonderful becomes apparent: the structure of the Periodic Table of the elements unfolds, the principal icon of chemistry. […] The first electron can enter the 1s-orbital, and helium’s (He) second electron can join it. At that point, the orbital is full, and lithium’s (Li) third electron must enter the next higher orbital, the 2s-orbital. The next electron, for beryllium (Be), can join it, but then it too is full. From that point on the next six electrons can enter in succession the three 2p-orbitals. After those six are present (at neon, Ne), all the 2p-orbitals are full and the eleventh electron, for sodium (Na), has to enter the 3s-orbital. […] Similar reasoning accounts for the entire structure of the Table, with elements in the same group all having analogous electron arrangements and each successive row (‘period’) corresponding to the next outermost shell of orbitals.”

“[O]n crossing the [Periodic] Table from left to right, atoms become smaller: even though they have progressively more electrons, the nuclear charge increases too, and draws the clouds in to itself. On descending a group, atoms become larger because in successive periods new outermost shells are started (as in going from lithium to sodium) and each new coating of cloud makes the atom bigger […] the ionization energy [is] the energy needed to remove one or more electrons from the atom. […] The ionization energy more or less follows the trend in atomic radii but in an opposite sense because the closer an electron lies to the positively charged nucleus, the harder it is to remove. Thus, ionization energy increases from left to right across the Table as the atoms become smaller. It decreases down a group because the outermost electron (the one that is most easily removed) is progressively further from the nucleus. […] the electron affinity [is] the energy released when an electron attaches to an atom. […] Electron affinities are highest on the right of the Table […] An ion is an electrically charged atom. That charge comes about either because the neutral atom has lost one or more of its electrons, in which case it is a positively charged cation […] or because it has captured one or more electrons and has become a negatively charged anion. […] Elements on the left of the Periodic Table, with their low ionization energies, are likely to lose electrons and form cations; those on the right, with their high electron affinities, are likely to acquire electrons and form anions. […] ionic bonds […] form primarily between atoms on the left and right of the Periodic Table.”

“Although the Schrödinger equation is too difficult to solve for molecules, powerful computational procedures have been developed by theoretical chemists to arrive at numerical solutions of great accuracy. All the procedures start out by building molecular orbitals from the available atomic orbitals and then setting about finding the best formulations. […] Depictions of electron distributions in molecules are now commonplace and very helpful for understanding the properties of molecules. It is particularly relevant to the development of new pharmacologically active drugs, where electron distributions play a central role […] Drug discovery, the identification of pharmacologically active species by computation rather than in vivo experiment, is an important target of modern computational chemistry.”

Work […] involves moving against an opposing force; heat […] is the transfer of energy that makes use of a temperature difference. […] the internal energy of a system that is isolated from external influences does not change. That is the First Law of thermodynamics. […] A system possesses energy, it does not possess work or heat (even if it is hot). Work and heat are two different modes for the transfer of energy into or out of a system. […] if you know the internal energy of a system, then you can calculate its enthalpy simply by adding to U the product of pressure and volume of the system (H = U + pV). The significance of the enthalpy […] is that a change in its value is equal to the output of energy as heat that can be obtained from the system provided it is kept at constant pressure. For instance, if the enthalpy of a system falls by 100 joules when it undergoes a certain change (such as a chemical reaction), then we know that 100 joules of energy can be extracted as heat from the system, provided the pressure is constant.”

“In the old days of physical chemistry (well into the 20th century), the enthalpy changes were commonly estimated by noting which bonds are broken in the reactants and which are formed to make the products, so A → B might be the bond-breaking step and B → C the new bond-formation step, each with enthalpy changes calculated from knowledge of the strengths of the old and new bonds. That procedure, while often a useful rule of thumb, often gave wildly inaccurate results because bonds are sensitive entities with strengths that depend on the identities and locations of the other atoms present in molecules. Computation now plays a central role: it is now routine to be able to calculate the difference in energy between the products and reactants, especially if the molecules are isolated as a gas, and that difference easily converted to a change of enthalpy. […] Enthalpy changes are very important for a rational discussion of changes in physical state (vaporization and freezing, for instance) […] If we know the enthalpy change taking place during a reaction, then provided the process takes place at constant pressure we know how much energy is released as heat into the surroundings. If we divide that heat transfer by the temperature, then we get the associated entropy change in the surroundings. […] provided the pressure and temperature are constant, a spontaneous change corresponds to a decrease in Gibbs energy. […] the chemical potential can be thought of as the Gibbs energy possessed by a standard-size block of sample. (More precisely, for a pure substance the chemical potential is the molar Gibbs energy, the Gibbs energy per mole of atoms or molecules.)”

“There are two kinds of work. One kind is the work of expansion that occurs when a reaction generates a gas and pushes back the atmosphere (perhaps by pressing out a piston). That type of work is called ‘expansion work’. However, a chemical reaction might do work other than by pushing out a piston or pushing back the atmosphere. For instance, it might do work by driving electrons through an electric circuit connected to a motor. This type of work is called ‘non-expansion work’. […] a change in the Gibbs energy of a system at constant temperature and pressure is equal to the maximum non-expansion work that can be done by the reaction. […] the link of thermodynamics with biology is that one chemical reaction might do the non-expansion work of building a protein from amino acids. Thus, a knowledge of the Gibbs energies changes accompanying metabolic processes is very important in bioenergetics, and much more important than knowing the enthalpy changes alone (which merely indicate a reaction’s ability to keep us warm).”

“[T]he probability that a molecule will be found in a state of particular energy falls off rapidly with increasing energy, so most molecules will be found in states of low energy and very few will be found in states of high energy. […] If the temperature is low, then the distribution declines so rapidly that only the very lowest levels are significantly populated. If the temperature is high, then the distribution falls off very slowly with increasing energy, and many high-energy states are populated. If the temperature is zero, the distribution has all the molecules in the ground state. If the temperature is infinite, all available states are equally populated. […] temperature […] is the single, universal parameter that determines the most probable distribution of molecules over the available states.”
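A quick numerical illustration of my own (not from the book): Boltzmann populations of a ladder of equally spaced energy levels at a few temperatures, showing the shift from nearly everything in the ground state at low temperature towards roughly equal populations at high temperature. The level spacing is an arbitrary assumption.

```python
import numpy as np

# Illustration (not from the book): Boltzmann populations of a ladder of
# equally spaced energy levels at different temperatures. At low T only the
# ground state is populated; as T grows the populations even out.

k_B = 1.380649e-23          # Boltzmann constant, J/K
spacing = 1e-21             # assumed level spacing in joules
levels = np.arange(5) * spacing

for T in (10, 100, 300, 3000):
    weights = np.exp(-levels / (k_B * T))
    populations = weights / weights.sum()
    print(f"T = {T:5d} K  populations: {np.round(populations, 3)}")
```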

“Mixing adds disorder and increases the entropy of the system and therefore lowers the Gibbs energy […] In the absence of mixing, a reaction goes to completion; when mixing of reactants and products is taken into account, equilibrium is reached when both are present […] Statistical thermodynamics, through the Boltzmann distribution and its dependence on temperature, allows physical chemists to understand why in some cases the equilibrium shifts towards reactants (which is usually unwanted) or towards products (which is normally wanted) as the temperature is raised. A rule of thumb […] is provided by a principle formulated by Henri Le Chatelier […] that a system at equilibrium responds to a disturbance by tending to oppose its effect. Thus, if a reaction releases energy as heat (is ‘exothermic’), then raising the temperature will oppose the formation of more products; if the reaction absorbs energy as heat (is ‘endothermic’), then raising the temperature will encourage the formation of more product.”

“Model building pervades physical chemistry […] some hold that the whole of science is based on building models of physical reality; much of physical chemistry certainly is.”

“For reasonably light molecules (such as the major constituents of air, N2 and O2) at room temperature, the molecules are whizzing around at an average speed of about 500 m/s (about 1000 mph). That speed is consistent with what we know about the propagation of sound, the speed of which is about 340 m/s through air: for sound to propagate, molecules must adjust their position to give a wave of undulating pressure, so the rate at which they do so must be comparable to their average speeds. […] a typical N2 or O2 molecule in air makes a collision every nanosecond and travels about 1000 molecular diameters between collisions. To put this scale into perspective: if a molecule is thought of as being the size of a tennis ball, then it travels about the length of a tennis court between collisions. Each molecule makes about a billion collisions a second.”
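The ~500 m/s figure is easy to check from the Maxwell-Boltzmann mean speed, v_mean = sqrt(8RT/(pi*M)); the back-of-envelope calculation below is mine, not the book's.

```python
import numpy as np

# Back-of-envelope check of the quoted ~500 m/s figure (my own calculation):
# mean speed from the Maxwell-Boltzmann distribution, v_mean = sqrt(8RT/(pi*M)).

R = 8.314          # gas constant, J/(mol K)
T = 298.0          # room temperature, K
for name, M in [("N2", 0.028), ("O2", 0.032)]:   # molar masses in kg/mol
    v_mean = np.sqrt(8 * R * T / (np.pi * M))
    print(f"{name}: mean speed ~ {v_mean:.0f} m/s")
```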

“X-ray diffraction makes use of the fact that electromagnetic radiation (which includes X-rays) consists of waves that can interfere with one another and give rise to regions of enhanced and diminished intensity. This so-called ‘diffraction pattern’ is characteristic of the object in the path of the rays, and mathematical procedures can be used to interpret the pattern in terms of the object’s structure. Diffraction occurs when the wavelength of the radiation is comparable to the dimensions of the object. X-rays have wavelengths comparable to the separation of atoms in solids, so are ideal for investigating their arrangement.”

“For most liquids the sample contracts when it freezes, so […] the temperature does not need to be lowered so much for freezing to occur. That is, the application of pressure raises the freezing point. Water, as in most things, is anomalous, and ice is less dense than liquid water, so water expands when it freezes […] when two gases are allowed to occupy the same container they invariably mix and each spreads uniformly through it. […] the quantity of gas that dissolves in any liquid is proportional to the pressure of the gas. […] When the temperature of [a] liquid is raised, it is easier for a dissolved molecule to gather sufficient energy to escape back up into the gas; the rate of impacts from the gas is largely unchanged. The outcome is a lowering of the concentration of dissolved gas at equilibrium. Thus, gases appear to be less soluble in hot water than in cold. […] the presence of dissolved substances affects the properties of solutions. For instance, the everyday experience of spreading salt on roads to hinder the formation of ice makes use of the lowering of freezing point of water when a salt is present. […] the boiling point is raised by the presence of a dissolved substance [whereas] the freezing point […] is lowered by the presence of a solute.”

“When a liquid and its vapour are present in a closed container the vapour exerts a characteristic pressure (when the escape of molecules from the liquid matches the rate at which they splash back down into it […][)] This characteristic pressure depends on the temperature and is called the ‘vapour pressure’ of the liquid. When a solute is present, the vapour pressure at a given temperature is lower than that of the pure liquid […] The extent of lowering is summarized by yet another limiting law of physical chemistry, ‘Raoult’s law’ [which] states that the vapour pressure of a solvent or of a component of a liquid mixture is proportional to the proportion of solvent or liquid molecules present. […] Osmosis [is] the tendency of solvent molecules to flow from the pure solvent to a solution separated from it by a [semi-]permeable membrane […] The entropy when a solute is present in a solvent is higher than when the solute is absent, so an increase in entropy, and therefore a spontaneous process, is achieved when solvent flows through the membrane from the pure liquid into the solution. The tendency for this flow to occur can be overcome by applying pressure to the solution, and the minimum pressure needed to overcome the tendency to flow is called the ‘osmotic pressure’. If one solution is put into contact with another through a semipermeable membrane, then there will be no net flow if they exert the same osmotic pressures and are ‘isotonic’.”

“Broadly speaking, the reaction quotient [‘Q’] is the ratio of concentrations, with product concentrations divided by reactant concentrations. It takes into account how the mingling of the reactants and products affects the total Gibbs energy of the mixture. The value of Q that corresponds to the minimum in the Gibbs energy […] is called the equilibrium constant and denoted K. The equilibrium constant, which is characteristic of a given reaction and depends on the temperature, is central to many discussions in chemistry. When K is large (1000, say), we can be reasonably confident that the equilibrium mixture will be rich in products; if K is small (0.001, say), then there will be hardly any products present at equilibrium and we should perhaps look for another way of making them. If K is close to 1, then both reactants and products will be abundant at equilibrium and will need to be separated. […] Equilibrium constants vary with temperature but not […] with pressure. […] van’t Hoff’s equation implies that if the reaction is strongly exothermic (releases a lot of energy as heat when it takes place), then the equilibrium constant decreases sharply as the temperature is raised. The opposite is true if the reaction is strongly endothermic (absorbs a lot of energy as heat). […] Typically it is found that the rate of a reaction [how fast it progresses] decreases as it approaches equilibrium. […] Most reactions go faster when the temperature is raised. […] reactions with high activation energies proceed slowly at low temperatures but respond sharply to changes of temperature. […] The surface area exposed by a catalyst is important for its function, for it is normally the case that the greater that area, the more effective is the catalyst.”
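The temperature dependence of K via van 't Hoff's equation can be illustrated in a couple of lines; the numbers below (reaction enthalpy, K at 298 K) are arbitrary assumptions of mine, not values from the book.

```python
import numpy as np

# Sketch (mine) of the van 't Hoff relation mentioned above:
# ln(K2/K1) = -(dH/R) * (1/T2 - 1/T1), so an exothermic reaction
# (dH < 0) has a smaller K at higher temperature.

R = 8.314                      # J/(mol K)
dH = -80e3                     # assumed reaction enthalpy, J/mol (exothermic)
K_298 = 1.0e4                  # assumed equilibrium constant at 298 K

for T in (298.0, 350.0, 400.0):
    lnK = np.log(K_298) - (dH / R) * (1 / T - 1 / 298.0)
    print(f"T = {T:.0f} K   K ~ {np.exp(lnK):.3g}")
```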

Links:

John Dalton.
Atomic orbital.
Electron configuration.
S,p,d,f orbitals.
Computational chemistry.
Atomic radius.
Covalent bond.
Gilbert Lewis.
Valence bond theory.
Molecular orbital theory.
Orbital hybridisation.
Bonding and antibonding orbitals.
Schrödinger equation.
Density functional theory.
Chemical thermodynamics.
Laws of thermodynamics/Zeroth law/First law/Second law/Third Law.
Conservation of energy.
Thermochemistry.
Bioenergetics.
Spontaneous processes.
Entropy.
Rudolf Clausius.
Chemical equilibrium.
Heat capacity.
Compressibility.
Statistical thermodynamics/statistical mechanics.
Boltzmann distribution.
State of matter/gas/liquid/solid.
Perfect gas/Ideal gas law.
Robert Boyle/Joseph Louis Gay-Lussac/Jacques Charles/Amedeo Avogadro.
Equation of state.
Kinetic theory of gases.
Van der Waals equation of state.
Maxwell–Boltzmann distribution.
Thermal conductivity.
Viscosity.
Nuclear magnetic resonance.
Debye–Hückel equation.
Ionic solids.
Catalysis.
Supercritical fluid.
Liquid crystal.
Graphene.
Benoît Paul Émile Clapeyron.
Phase (matter)/phase diagram/Gibbs’ phase rule.
Ideal solution/regular solution.
Henry’s law.
Chemical kinetics.
Electrochemistry.
Rate equation/First order reactions/Second order reactions.
Rate-determining step.
Arrhenius equation.
Collision theory.
Diffusion-controlled and activation-controlled reactions.
Transition state theory.
Photochemistry/fluorescence/phosphorescence/photoexcitation.
Photosynthesis.
Redox reactions.
Electrochemical cell.
Fuel cell.
Reaction dynamics.
Spectroscopy/emission spectroscopy/absorption spectroscopy/Raman spectroscopy.
Raman effect.
Magnetic resonance imaging.
Fourier-transform spectroscopy.
Electron paramagnetic resonance.
Mass spectrum.
Electron spectroscopy for chemical analysis.
Scanning tunneling microscope.
Chemisorption/physisorption.

October 5, 2017 Posted by | Biology, Books, Chemistry, Pharmacology, Physics

Earth System Science

I decided not to rate this book. Some parts are great; other parts I didn’t think were very good.

I’ve added some quotes and links below. First a few links (I’ve tried not to add links here which I’ve also included in the quotes below):

Carbon cycle.
Origin of water on Earth.
Gaia hypothesis.
Albedo (climate and weather).
Snowball Earth.
Carbonate–silicate cycle.
Carbonate compensation depth.
Isotope fractionation.
CLAW hypothesis.
Mass-independent fractionation.
δ13C.
Great Oxygenation Event.
Acritarch.
Grypania.
Neoproterozoic.
Rodinia.
Sturtian glaciation.
Marinoan glaciation.
Ediacaran biota.
Cambrian explosion.
Quaternary.
Medieval Warm Period.
Little Ice Age.
Eutrophication.
Methane emissions.
Keeling curve.
CO2 fertilization effect.
Acid rain.
Ocean acidification.
Earth systems models.
Clausius–Clapeyron relation.
Thermohaline circulation.
Cryosphere.
The limits to growth.
Exoplanet Biosignature Gases.
Transiting Exoplanet Survey Satellite (TESS).
James Webb Space Telescope.
Habitable zone.
Kepler-186f.

A few quotes from the book:

“The scope of Earth system science is broad. It spans 4.5 billion years of Earth history, how the system functions now, projections of its future state, and ultimate fate. […] Earth system science is […] a deeply interdisciplinary field, which synthesizes elements of geology, biology, chemistry, physics, and mathematics. It is a young, integrative science that is part of a wider 21st-century intellectual trend towards trying to understand complex systems, and predict their behaviour. […] A key part of Earth system science is identifying the feedback loops in the Earth system and understanding the behaviour they can create. […] In systems thinking, the first step is usually to identify your system and its boundaries. […] what is part of the Earth system depends on the timescale being considered. […] The longer the timescale we look over, the more we need to include in the Earth system. […] for many Earth system scientists, the planet Earth is really comprised of two systems — the surface Earth system that supports life, and the great bulk of the inner Earth underneath. It is the thin layer of a system at the surface of the Earth […] that is the subject of this book.”

“Energy is in plentiful supply from the Sun, which drives the water cycle and also fuels the biosphere, via photosynthesis. However, the surface Earth system is nearly closed to materials, with only small inputs to the surface from the inner Earth. Thus, to support a flourishing biosphere, all the elements needed by life must be efficiently recycled within the Earth system. This in turn requires energy, to transform materials chemically and to move them physically around the planet. The resulting cycles of matter between the biosphere, atmosphere, ocean, land, and crust are called global biogeochemical cycles — because they involve biological, geological, and chemical processes. […] The global biogeochemical cycling of materials, fuelled by solar energy, has transformed the Earth system. […] It has made the Earth fundamentally different from its state before life and from its planetary neighbours, Mars and Venus. Through cycling the materials it needs, the Earth’s biosphere has bootstrapped itself into a much more productive state.”

“Each major element important for life has its own global biogeochemical cycle. However, every biogeochemical cycle can be conceptualized as a series of reservoirs (or ‘boxes’) of material connected by fluxes (or flows) of material between them. […] When a biogeochemical cycle is in steady state, the fluxes in and out of each reservoir must be in balance. This allows us to define additional useful quantities. Notably, the amount of material in a reservoir divided by the exchange flux with another reservoir gives the average ‘residence time’ of material in that reservoir with respect to the chosen process of exchange. For example, there are around 7 × 10^16 moles of carbon dioxide (CO2) in today’s atmosphere, and photosynthesis removes around 9 × 10^15 moles of CO2 per year, giving each molecule of CO2 a residence time of roughly eight years in the atmosphere before it is taken up, somewhere in the world, by photosynthesis. […] There are 3.8 × 10^19 moles of molecular oxygen (O2) in today’s atmosphere, and oxidative weathering removes around 1 × 10^13 moles of O2 per year, giving oxygen a residence time of around four million years with respect to removal by oxidative weathering. This makes the oxygen cycle […] a geological timescale cycle.”
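The residence-time arithmetic in the passage is simple enough to reproduce directly, using the passage's own numbers:

```python
# Reproducing the residence-time arithmetic quoted above
# (reservoir size divided by exchange flux):

co2_atmosphere = 7e16        # moles of CO2 in the atmosphere
photosynthesis = 9e15        # moles of CO2 removed per year
print(co2_atmosphere / photosynthesis, "years")       # roughly 8 years

o2_atmosphere = 3.8e19       # moles of O2 in the atmosphere
oxidative_weathering = 1e13  # moles of O2 removed per year
print(o2_atmosphere / oxidative_weathering, "years")  # roughly 4 million years
```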

“The water cycle is the physical circulation of water around the planet, between the ocean (where 97 per cent is stored), atmosphere, ice sheets, glaciers, sea-ice, freshwaters, and groundwater. […] To change the phase of water from solid to liquid or liquid to gas requires energy, which in the climate system comes from the Sun. Equally, when water condenses from gas to liquid or freezes from liquid to solid, energy is released. Solar heating drives evaporation from the ocean. This is responsible for supplying about 90 per cent of the water vapour to the atmosphere, with the other 10 per cent coming from evaporation on the land and freshwater surfaces (and sublimation of ice and snow directly to vapour). […] The water cycle is intimately connected to other biogeochemical cycles […]. Many compounds are soluble in water, and some react with water. This makes the ocean a key reservoir for several essential elements. It also means that rainwater can scavenge soluble gases and aerosols out of the atmosphere. When rainwater hits the land, the resulting solution can chemically weather rocks. Silicate weathering in turn helps keep the climate in a state where water is liquid.”

“In modern terms, plants acquire their carbon from carbon dioxide in the atmosphere, add electrons derived from water molecules to the carbon, and emit oxygen to the atmosphere as a waste product. […] In energy terms, global photosynthesis today captures about 130 terawatts (1 TW = 10^12 W) of solar energy in chemical form — about half of it in the ocean and about half on land. […] All the breakdown pathways for organic carbon together produce a flux of carbon dioxide back to the atmosphere that nearly balances photosynthetic uptake […] The surface recycling system is almost perfect, but a tiny fraction (about 0.1 per cent) of the organic carbon manufactured in photosynthesis escapes recycling and is buried in new sedimentary rocks. This organic carbon burial flux leaves an equivalent amount of oxygen gas behind in the atmosphere. Hence the burial of organic carbon represents the long-term source of oxygen to the atmosphere. […] the Earth’s crust has much more oxygen trapped in rocks in the form of oxidized iron and sulphur, than it has organic carbon. This tells us that there has been a net source of oxygen to the crust over Earth history, which must have come from the loss of hydrogen to space.”

“The oxygen cycle is relatively simple, because the reservoir of oxygen in the atmosphere is so massive that it dwarfs the reservoirs of organic carbon in vegetation, soils, and the ocean. Hence oxygen cannot get used up by the respiration or combustion of organic matter. Even the combustion of all known fossil fuel reserves can only put a small dent in the much larger reservoir of atmospheric oxygen (there are roughly 4 × 10^17 moles of fossil fuel carbon, which is only about 1 per cent of the O2 reservoir). […] Unlike oxygen, the atmosphere is not the major surface reservoir of carbon. The amount of carbon in global vegetation is comparable to that in the atmosphere and the amount of carbon in soils (including permafrost) is roughly four times that in the atmosphere. Even these reservoirs are dwarfed by the ocean, which stores forty-five times as much carbon as the atmosphere, thanks to the fact that CO2 reacts with seawater. […] The exchange of carbon between the atmosphere and the land is largely biological, involving photosynthetic uptake and release by aerobic respiration (and, to a lesser extent, fires). […] Remarkably, when we look over Earth history there are fluctuations in the isotopic composition of carbonates, but no net drift up or down. This suggests that there has always been roughly one-fifth of carbon being buried in organic form and the other four-fifths as carbonate rocks. Thus, even on the early Earth, the biosphere was productive enough to support a healthy organic carbon burial flux.”
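
As a quick sanity check on the parenthetical claim about fossil fuels, one can compare the two reservoirs directly. This is just a back-of-the-envelope sketch of mine, assuming roughly one mole of O2 is consumed per mole of carbon burned:

# Rough check: how big a dent could burning all fossil fuel carbon make in atmospheric O2?
fossil_fuel_carbon = 4e17  # moles of fossil fuel carbon (as quoted above)
atmospheric_o2 = 3.8e19    # moles of O2 in the atmosphere (as quoted earlier)

# Assumption: roughly one mole of O2 consumed per mole of carbon burned (C + O2 -> CO2).
fraction_consumed = fossil_fuel_carbon / atmospheric_o2
print(f"Fossil fuel carbon corresponds to ~{fraction_consumed:.1%} of the O2 reservoir")
# -> about 1 per cent, in line with the passage.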

“The two most important nutrients for life are phosphorus and nitrogen, and they have very different biogeochemical cycles […] The largest reservoir of nitrogen is in the atmosphere, whereas the heavier phosphorus has no significant gaseous form. Phosphorus thus presents a greater recycling challenge for the biosphere. All phosphorus enters the surface Earth system from the chemical weathering of rocks on land […]. Phosphorus is concentrated in rocks in grains or veins of the mineral apatite. Natural selection has made plants on land and their fungal partners […] very effective at acquiring phosphorus from rocks, by manufacturing and secreting a range of organic acids that dissolve apatite. […] The average terrestrial ecosystem recycles phosphorus roughly fifty times before it is lost into freshwaters. […] The loss of phosphorus from the land is the ocean’s gain, providing the key input of this essential nutrient. Phosphorus is stored in the ocean as phosphate dissolved in the water. […] removal of phosphorus into the rock cycle balances the weathering of phosphorus from rocks on land. […] Although there is a large reservoir of nitrogen in the atmosphere, the molecules of nitrogen gas (N2) are extremely strongly bonded together, making nitrogen unavailable to most organisms. To split N2 and make nitrogen biologically available requires a remarkable biochemical feat — nitrogen fixation — which uses a lot of energy. In the ocean the dominant nitrogen fixers are cyanobacteria with a direct source of energy from sunlight. On land, various plants form a symbiotic partnership with nitrogen fixing bacteria, making a home for them in root nodules and supplying them with food in return for nitrogen. […] Nitrogen fixation and denitrification form the major input and output fluxes of nitrogen to both the land and the ocean, but there is also recycling of nitrogen within ecosystems. […] There is an intimate link between nutrient regulation and atmospheric oxygen regulation, because nutrient levels and marine productivity determine the source of oxygen via organic carbon burial. However, ocean nutrients are regulated on a much shorter timescale than atmospheric oxygen because their residence times are much shorter—about 2,000 years for nitrogen and 20,000 years for phosphorus.”

“[F]orests […] are vulnerable to increases in oxygen that increase the frequency and ferocity of fires. […] Combustion experiments show that fires only become self-sustaining in natural fuels when oxygen reaches around 17 per cent of the atmosphere. Yet for the last 370 million years there is a nearly continuous record of fossil charcoal, indicating that oxygen has never dropped below this level. At the same time, oxygen has never risen too high for fires to have prevented the slow regeneration of forests. The ease of combustion increases non-linearly with oxygen concentration, such that above 25–30 per cent oxygen (depending on the wetness of fuel) it is hard to see how forests could have survived. Thus oxygen has remained within 17–30 per cent of the atmosphere for at least the last 370 million years.”

“[T]he rate of silicate weathering increases with increasing CO2 and temperature. Thus, if something tends to increase CO2 or temperature it is counteracted by increased CO2 removal by silicate weathering. […] Plants are sensitive to variations in CO2 and temperature, and together with their fungal partners they greatly amplify weathering rates […] the most pronounced change in atmospheric CO2 over Phanerozoic time was due to plants colonizing the land. This started around 470 million years ago and escalated with the first forests 370 million years ago. The resulting acceleration of silicate weathering is estimated to have lowered the concentration of atmospheric CO2 by an order of magnitude […], and cooled the planet into a series of ice ages in the Carboniferous and Permian Periods.”

“The first photosynthesis was not the kind we are familiar with, which splits water and spits out oxygen as a waste product. Instead, early photosynthesis was ‘anoxygenic’ — meaning it didn’t produce oxygen. […] It could have used a range of compounds, in place of water, as a source of electrons with which to fix carbon from carbon dioxide and reduce it to sugars. Potential electron donors include hydrogen (H2) and hydrogen sulphide (H2S) in the atmosphere, or ferrous iron (Fe2+) dissolved in the ancient oceans. All of these are easier to extract electrons from than water. Hence they require fewer photons of sunlight and simpler photosynthetic machinery. The phylogenetic tree of life confirms that several forms of anoxygenic photosynthesis evolved very early on, long before oxygenic photosynthesis. […] If the early biosphere was fuelled by anoxygenic photosynthesis, plausibly based on hydrogen gas, then a key recycling process would have been the biological regeneration of this gas. Calculations suggest that once such recycling had evolved, the early biosphere might have achieved a global productivity up to 1 per cent of the modern marine biosphere. If early anoxygenic photosynthesis used the supply of reduced iron upwelling in the ocean, then its productivity would have been controlled by ocean circulation and might have reached 10 per cent of the modern marine biosphere. […] The innovation that supercharged the early biosphere was the origin of oxygenic photosynthesis using abundant water as an electron donor. This was not an easy process to evolve. To split water requires more energy — i.e. more high-energy photons of sunlight — than any of the earlier anoxygenic forms of photosynthesis. Evolution’s solution was to wire together two existing ‘photosystems’ in one cell and bolt on the front of them a remarkable piece of biochemical machinery that can rip apart water molecules. The result was the first cyanobacterial cell — the ancestor of all organisms performing oxygenic photosynthesis on the planet today. […] Once oxygenic photosynthesis had evolved, the productivity of the biosphere would no longer have been restricted by the supply of substrates for photosynthesis, as water and carbon dioxide were abundant. Instead, the availability of nutrients, notably nitrogen and phosphorus, would have become the major limiting factors on the productivity of the biosphere — as they still are today.” [If you’re curious to know more about how that fascinating ‘biochemical machinery’ works, this is a great book on these and related topics – US].

“On Earth, anoxygenic photosynthesis requires one photon per electron, whereas oxygenic photosynthesis requires two photons per electron. On Earth it took up to a billion years to evolve oxygenic photosynthesis, based on two photosystems that had already evolved independently in different types of anoxygenic photosynthesis. Around a fainter K- or M-type star […] oxygenic photosynthesis is estimated to require three or more photons per electron — and a corresponding number of photosystems — making it harder to evolve. […] However, fainter stars spend longer on the main sequence, giving more time for evolution to occur.”

“There was a lot more energy to go around in the post-oxidation world, because respiration of organic matter with oxygen yields an order of magnitude more energy than breaking food down anaerobically. […] The revolution in biological complexity culminated in the ‘Cambrian Explosion’ of animal diversity 540 to 515 million years ago, in which modern food webs were established in the ocean. […] Since then the most fundamental change in the Earth system has been the rise of plants on land […], beginning around 470 million years ago and culminating in the first global forests by 370 million years ago. This doubled global photosynthesis, increasing flows of materials. Accelerated chemical weathering of the land surface lowered atmospheric carbon dioxide levels and increased atmospheric oxygen levels, fully oxygenating the deep ocean. […] Although grasslands now cover about a third of the Earth’s productive land surface they are a geologically recent arrival. Grasses evolved amidst a trend of declining atmospheric carbon dioxide, and climate cooling and drying, over the past forty million years, and they only became widespread in two phases during the Miocene Epoch around seventeen and six million years ago. […] Since the rise of complex life, there have been several mass extinction events. […] whilst these rolls of the extinction dice marked profound changes in evolutionary winners and losers, they did not fundamentally alter the operation of the Earth system.” [If you’re interested in this kind of stuff, the evolution of food webs and so on, Herrera et al.’s wonderful book is a great place to start – US]

“The Industrial Revolution marks the transition from societies fuelled largely by recent solar energy (via biomass, water, and wind) to ones fuelled by concentrated ‘ancient sunlight’. Although coal had been used in small amounts for millennia, for example for iron making in ancient China, fossil fuel use only took off with the invention and refinement of the steam engine. […] With the Industrial Revolution, food and biomass have ceased to be the main source of energy for human societies. Instead the energy contained in annual food production, which supports today’s population, is, at fifty exajoules (1 EJ = 10^18 joules), only about a tenth of the total energy input to human societies of 500 EJ/yr. This in turn is equivalent to about a tenth of the energy captured globally by photosynthesis. […] solar energy is not very efficiently converted by photosynthesis, which is 1–2 per cent efficient at best. […] The amount of sunlight reaching the Earth’s land surface (2.5 × 10^16 W) dwarfs current total human power consumption (1.5 × 10^13 W) by more than a factor of a thousand.”
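
The energy comparisons in this passage can likewise be verified with simple arithmetic. Below is a small Python sketch using the figures quoted above; the seconds-per-year conversion and the variable names are my own additions:

# Comparing the energy flows mentioned above (all figures approximate).
SECONDS_PER_YEAR = 3.15e7

food_energy = 50          # EJ/yr contained in annual food production
human_energy_input = 500  # EJ/yr total energy input to human societies
photosynthesis_tw = 130   # TW captured globally by photosynthesis
solar_on_land = 2.5e16    # W of sunlight reaching the land surface
human_power = 1.5e13      # W of current total human power consumption

photosynthesis_ej = photosynthesis_tw * 1e12 * SECONDS_PER_YEAR / 1e18  # TW -> EJ/yr
print(f"Global photosynthesis: ~{photosynthesis_ej:.0f} EJ/yr")
print(f"Food energy / total human energy input: {food_energy / human_energy_input:.0%}")
print(f"Human energy input / photosynthesis: {human_energy_input / photosynthesis_ej:.0%}")
print(f"Land-surface sunlight / human power use: ~{solar_on_land / human_power:.0f}x")

This reproduces the ratios in the quote: food is about a tenth of total human energy use, which in turn is roughly a tenth of photosynthetic capture, and sunlight reaching the land surface exceeds human power consumption by well over a factor of a thousand.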

“The Earth system’s primary energy source is sunlight, which the biosphere converts and stores as chemical energy. The energy-capture devices — photosynthesizing organisms — construct themselves out of carbon dioxide, nutrients, and a host of trace elements taken up from their surroundings. Inputs of these elements and compounds from the solid Earth system to the surface Earth system are modest. Some photosynthesizers have evolved to increase the inputs of the materials they need — for example, by fixing nitrogen from the atmosphere and selectively weathering phosphorus out of rocks. Even more importantly, other heterotrophic organisms have evolved that recycle the materials that the photosynthesizers need (often as a by-product of consuming some of the chemical energy originally captured in photosynthesis). This extraordinary recycling system is the primary mechanism by which the biosphere maintains a high level of energy capture (productivity).”

“[L]ike all stars on the ‘main sequence’ (which generate energy through the nuclear fusion of hydrogen into helium), the Sun is burning inexorably brighter with time — roughly 1 per cent brighter every 100 million years — and eventually this will overheat the planet. […] Over Earth history, the silicate weathering negative feedback mechanism has counteracted the steady brightening of the Sun by removing carbon dioxide from the atmosphere. However, this cooling mechanism is near the limits of its operation, because CO2 has fallen to limiting levels for the majority of plants, which are key amplifiers of silicate weathering. Although a subset of plants have evolved which can photosynthesize down to lower CO2 levels [the author does not go further into this topic, but here’s a relevant link – US], they cannot draw CO2 down lower than about 10 ppm. This means there is a second possible fate for life — running out of CO2. Early models projected either CO2 starvation or overheating […] occurring about a billion years in the future. […] Whilst this sounds comfortingly distant, it represents a much shorter future lifespan for the Earth’s biosphere than its past history. Earth’s biosphere is entering its old age.”

September 28, 2017 Posted by | Astronomy, Biology, Books, Botany, Chemistry, Geology, Paleontology, Physics

Words

The words below are words which I encountered while reading the Rex Stout novels The Broken Vase, Double for Death, The Sound of Murder, Mountain Cat, and the Flashman/Fraser novels Flashman and the Dragon & Flashman at the Charge.

Asperity. Tantalus. Whizbang. Hammy. Regnant. Mordacity. Blotter. Quietus. Debouch. Acidulous. Aniline. Prolegomenon. Suasion. Spoor. Mangy. Clematis. Whittle. Palmistry. Carnality. Clangor.

Cerise. Coruscation. Fluster. Conviviality. Interstice. Chirography. Dub. Grubstake. Pilaster. Sagebrush. Pronghorn. Prognathous. Greensward. Palomino. Spelter. Puggle. Lorcha. Kampilan. Caulk. Cherub.

Thew. Effulgence. Poppet. Colander. Brolly. Bund. Pennon. Cove. Lamasery. Lamé. Patter. Gibber. Snickersnee. Blub. Beckon. Tog. Inveigle. Fuddle. Spoony. Roué.

Equerry. Gazette. Rig-out. Lashing. Clamber. Wainscot. Saunter. Tootle. Latterly. Serge. Redoubt. Charabanc. Indaba. Cess. Gotch. Bailiwick. Reveler. Exult. Hawse. Recreant.

September 27, 2017 Posted by | Books, Language

The Biology of Moral Systems (III)

This will be my last post about the book. It’s an important work which deserves to be read by far more people than have already read it. I have added some quotes and observations from the last chapters of the book below.

“If egoism, as self-interest in the biologists’ sense, is the reason for the promotion of ethical behavior, then, paradoxically, it is expected that everyone will constantly promote the notion that egoism is not a suitable theory of action, and, a fortiori, that he himself is not an egoist. Most of all he must present this appearance to his closest associates because it is in his best interests to do so – except, perhaps, to his closest relatives, to whom his egoism may often be displayed in cooperative ventures from which some distant- or non-relative suffers. Indeed, it may be arguable that it will be in the egoist’s best interest not to know (consciously) or to admit to himself that he is an egoist because of the value to himself of being able to convince others he is not.”

“The function of [societal] punishments and rewards, I have suggested, is to manipulate the behavior of participating individuals, restricting individual efforts to serve their own interests at others’ expense so as to promote harmony and unity within the group. The function of harmony and unity […] is to allow the group to compete against hostile forces, especially other human groups. It is apparent that success of the group may serve the interests of all individuals in the group; but it is also apparent that group success can be achieved with different patterns of individual success differentials within the group. So […] it is in the interests of those who are differentially successful to promote both unity and the rules so that group success will occur without necessitating changes deleterious to them. Similarly, it may be in the interests of those individuals who are relatively unsuccessful to promote dissatisfaction with existing rules and the notion that group success would be more likely if the rules were altered to favor them. […] the rules of morality and law alike seem not to be designed explicitly to allow people to live in harmony within societies but to enable societies to be sufficiently united to deter their enemies. Within-society harmony is the means not the end. […] extreme within-group altruism seems to correlate with and be historically related to between-group strife.”

“There are often few or no legitimate or rational expectations of reciprocity or “fairness” between social groups (especially warring or competing groups such as tribes or nations). Perhaps partly as a consequence, lying, deceit, or otherwise nasty or even heinous acts committed against enemies may sometimes not be regarded as immoral by others within the group of those who commit them. They may even be regarded as highly moral if they seem dramatically to serve the interests of the group whose members commit them.”

“Two major assumptions, made universally or most of the time by philosophers, […] are responsible for the confusion that prevents philosophers from making sense out of morality […]. These assumptions are the following: 1. That proximate and ultimate mechanisms or causes have the same kind of significance and can be considered together as if they were members of the same class of causes; this is a failure to understand that proximate causes are evolved because of ultimate causes, and therefore may be expected to serve them, while the reverse is not true. Thus, pleasure is a proximate mechanism that in the usual environments of history is expected to impel us toward behavior that will contribute to our reproductive success. Contrarily, acts leading to reproductive success are not proximate mechanisms that evolved because they served the ultimate function of bringing us pleasure. 2. That morality inevitably involves some self-sacrifice. This assumption involves at least three elements: a. Failure to consider altruism as benefits to the actor. […] b. Failure to comprehend all avenues of indirect reciprocity within groups. c. Failure to take into account both within-group and between-group benefits.”

“If morality means true sacrifice of one’s own interests, and those of his family, then it seems to me that we could not have evolved to be moral. If morality requires ethical consistency, whereby one does not do socially what he would not advocate and assist all others also to do, then, again, it seems to me that we could not have evolved to be moral. […] humans are not really moral at all, in the sense of “true sacrifice” given above, but […] the concept of morality is useful to them. […] If it is so, then we might imagine that, in the sense and to the extent that they are anthropomorphized, the concepts of saints and angels, as well as that of God, were also created because of their usefulness to us. […] I think there have been far fewer […] truly self-sacrificing individuals than might be supposed, and most cases that might be brought forward are likely instead to be illustrations of the complexity and indirectness of reciprocity, especially the social value of appearing more altruistic than one is. […] I think that […] the concept of God must be viewed as originally generated and maintained for the purpose – now seen by many as immoral – of furthering the interests of one group of humans at the expense of one or more other groups. […] Gods are inventions originally developed to extend the notion that some have greater rights than others to design and enforce rules, and that some are more destined to be leaders, others to be followers. This notion, in turn, arose out of prior asymmetries in both power and judgment […] It works when (because) leaders are (have been) valuable, especially in the context of intergroup competition.”

“We try to move moral issues in the direction of involving no conflict of interest, always, I suggest, by seeking universal agreement with our own point of view.”

“Moral and legal systems are commonly distinguished by those, like moral philosophers, who study them formally. I believe, however, that the distinction between them is usually poorly drawn, and based on a failure to realize that moral as well as legal behavior occurs as a result of probable and possible punishments and rewards. […] we often internalize the rules of law as well as the rules of morality – and perhaps by the same process […] It would seem that the rules of law are simply a specialized, derived aspect of what in earlier societies would have been a part of moral rules. On the other hand, law covers only a fraction of the situations in which morality is involved […] Law […] seems to be little more than ethics written down.”

“Anyone who reads the literature on dispute settlement within different societies […] will quickly understand that genetic relatedness counts: it allows for one-way flows of benefits and alliances. Long-term association also counts; it allows for reliability and also correlates with genetic relatedness. […] The larger the social group, the more fluid its membership; and the more attenuated the social interactions of its membership, the more they are forced to rely on formal law”.

“[I]ndividuals have separate interests. They join forces (live in groups; become social) when they share certain interests that can be better realized for all by close proximity or some forms of cooperation. Typically, however, the overlaps of interests rarely are completely congruent with those of either other individuals or the rest of the group. This means that, even during those times when individual interests within a group are most broadly overlapping, we may expect individuals to temper their cooperation with efforts to realize their own interests, and we may also expect them to have evolved to be adept at using others, or at thwarting the interests of others, to serve themselves (and their relatives). […] When the interests of all are most nearly congruent, it is essentially always due to a threat shared equally. Such threats almost always have to be external (or else they are less likely to affect everyone equally […] External threats to societies are typically other societies. Maintenance of such threats can yield situations in which everyone benefits from rigid, hierarchical, quasi-military, despotic government. Liberties afforded leaders – even elaborate perquisites of dictators – may be tolerated because such threats are ever-present […] Extrinsic threats, and the governments they produce, can yield inflexibilities of political structures that can persist across even lengthy intervals during which the threats are absent. Some societies have been able to structure their defenses against external threats as separate units (armies) within society, and to keep them separate. These rigidly hierarchical, totalitarian, and dictatorial subunits rise and fall in size and influence according to the importance of the external threat. […] Discussion of liberty and equality in democracies closely parallels discussions of morality and moral systems. In either case, adding a perspective from evolutionary biology seems to me to have potential for clarification.”

“It is indeed common, if not universal, to regard moral behavior as a kind of altruism that necessarily yields the altruist less than he gives, and to see egoism as either the opposite of morality or the source of immorality; but […] this view is usually based on an incomplete understanding of nepotism, reciprocity, and the significance of within-group unity for between-group competition. […] My view of moral systems in the real world, however, is that they are systems in which costs and benefits of specific actions are manipulated so as to produce reasonably harmonious associations in which everyone nevertheless pursues his own (in evolutionary terms) self-interest. I do not expect that moral and ethical arguments can ever be finally resolved. Compromises and contracts, then, are (at least currently) the only real solutions to actual conflicts of interest. This is why moral and ethical decisions must arise out of decisions of the collective of affected individuals; there is no single source of right and wrong.

I would also argue against the notion that rationality can be easily employed to produce a world of humans that self-sacrifice in favor of other humans, not to say nonhuman animals, plants, and inanimate objects. Declarations of such intentions may themselves often be the acts of self-interested persons developing, consciously or not, a socially self-benefiting view of themselves as extreme altruists. In this connection it is not irrelevant that the more dissimilar a species or object is to one’s self the less likely it is to provide a competitive threat by seeking the same resources. Accordingly, we should not be surprised to find humans who are highly benevolent toward other species or inanimate objects (some of which may serve them uncomplainingly), yet relatively hostile and noncooperative with fellow humans. As Darwin (1871) noted with respect to dogs, we have selected our domestic animals to return our altruism with interest.”

“It is not easy to discover precisely what historical differences have shaped current male-female differences. If, however, humans are in a general way similar to other highly parental organisms that live in social groups […] then we can hypothesize as follows: for men much of sexual activity has had as a main (ultimate) significance the initiating of pregnancies. It would follow that when a man avoids copulation it is likely to be because (1) there is no likelihood of pregnancy or (2) the costs entailed (venereal disease, danger from competition with other males, lowered status if the event becomes public, or an undesirable commitment) are too great in comparison with the probability that pregnancy will be induced. The man himself may be judging costs against the benefits of immediate sensory pleasures, such as orgasms (i.e., rather than thinking about pregnancy he may say that he was simply uninterested), but I am assuming that selection has tuned such expectations in terms of their probability of leading to actual reproduction […]. For women, I hypothesize, sexual activity per se has been more concerned with the securing of resources (again, I am speaking of ultimate and not necessarily conscious concerns) […]. Ordinarily, when women avoid or resist copulation, I speculate further, the disinterest, aversion, or inhibition may be traceable eventually to one (or more) of three causes: (1) there is no promise of commitment (of resources), (2) there is a likelihood of undesirable commitment (e.g., to a man with inadequate resources), or (3) there is a risk of loss of interest by a man with greater resources than the one involved […] A man behaving so as to avoid pregnancies, and who derives from an evolutionary background of avoiding pregnancies, should be expected to favor copulation with women who are for age or other reasons incapable of pregnancy. A man derived from an evolutionary process in which securing of pregnancies typically was favored may be expected to be most interested sexually in women most likely to become pregnant and near the height of the reproductive probability curve […] This means that men should usually be expected to anticipate the greatest sexual pleasure with young, healthy, intelligent women who show promise of providing superior parental care. […] In sexual competition, the alternatives of a man without resources are to present himself as a resource (i.e., as a mimic of one with resources or as one able and likely to secure resources because of his personal attributes […]), to obtain sex by force (rape), or to secure resources through a woman (e.g., allow himself to be kept by a relatively undesired woman, perhaps as a vehicle to secure liaisons with other women). […] in nonhuman species of higher animals, control of the essential resources of parenthood by females correlates with lack of parental behavior by males, promiscuous polygyny, and absence of long-term pair bonds. There is some evidence of parallel trends within human societies (cf. Flinn, 1981).” [It’s of some note that quite a few good books have been written on these topics since Alexander first published his book, so there are many places to look for detailed coverage of topics like these if you’re curious to know more – I can recommend both Kappeler & van Schaik (a must-read book on sexual selection, in my opinion) & Bobby Low. I didn’t think too highly of Miller or Meston & Buss, but those are a few other books on these topics which I’ve read – US].

“The reason that evolutionary knowledge has no moral content is [that] morality is a matter of whose interests one should, by conscious and willful behavior, serve, and how much; evolutionary knowledge contains no messages on this issue. The most it can do is provide information about the reasons for current conditions and predict some consequences of alternative courses of action. […] If some biologists and nonbiologists make unfounded assertions into conclusions, or develop pernicious and fallible arguments, then those assertions and arguments should be exposed for what they are. The reason for doing this, however, is not […should not be..? – US] to prevent or discourage any and all analyses of human activities, but to enable us to get on with a proper sort of analysis. Those who malign without being specific; who attack people rather than ideas; who gratuitously translate hypotheses into conclusions and then refer to them as “explanations,” “stories,” or “just-so-stories”; who parade the worst examples of argument and investigation with the apparent purpose of making all efforts at human self-analysis seem silly and trivial, I see as dangerously close to being ideologues at least as worrisome as those they malign. I cannot avoid the impression that their purpose is not to enlighten, but to play upon the uneasiness of those for whom the approach of evolutionary biology is alien and disquieting, perhaps for political rather than scientific purposes. It is more than a little ironic that the argument of politics rather than science is their own chief accusation with respect to scientists seeking to analyze human behavior in evolutionary terms (e.g. Gould and Lewontin, 1979 […]).”

“[C]urrent selective theory indicates that natural selection has never operated to prevent species extinction. Instead it operates by saving the genetic materials of those individuals or families that outreproduce others. Whether species become extinct or not (and most have) is an incidental or accidental effect of natural selection. An inference from this is that the members of no species are equipped, as a direct result of their evolutionary history, with traits designed explicitly to prevent extinction when that possibility looms. […] Humans are no exception: unless their comprehension of the likelihood of extinction is so clear and real that they perceive the threat to themselves as individuals, and to their loved ones, they cannot be expected to take the collective action that will be necessary to reduce the risk of extinction.”

“In examining ourselves […] we are forced to use the attributes we wish to analyze to carry out the analysis, while resisting certain aspects of the analysis. At the very same time, we pretend that we are not resisting at all but are instead giving perfectly legitimate objections; and we use our realization that others will resist the analysis, for reasons as arcane as our own, to enlist their support in our resistance. And they very likely will give it. […] If arguments such as those made here have any validity it follows that a problem faced by everyone, in respect to morality, is that of discovering how to subvert or reduce some aspects of individual selfishness that evidently derive from our history of genetic individuality.”

“Essentially everyone thinks of himself as well-meaning, but from my viewpoint a society of well-meaning people who understand themselves and their history very well is a better milieu than a society of well-meaning people who do not.”

September 22, 2017 Posted by | Anthropology, Biology, Books, Evolutionary biology, Genetics, Philosophy, Psychology, Religion

The fall of Rome

“According to the conventional view of things, the military and political disintegration of Roman power in the West precipitated the end of a civilization. Ancient sophistication died, leaving the western world in the grip of a ‘Dark Age’ of material and intellectual poverty, out of which it was only slowly to emerge. […] a much more comfortable vision of the end of empire [has been] spreading in recent years through the English-speaking world. […] There has been a sea change in the language used to describe post-Roman times. Words like ‘decline’ and ‘crisis’ […] have largely disappeared from historians’ vocabularies, to be replaced by neutral terms, like ‘transition’, ‘change’, and ‘transformation’. […] some historians in recent decades have also questioned the entire premiss that the dissolution of the Roman empire in the West was caused by hostile and violent invasion. […] some recent works […] present the theory of peaceful accommodation as a universally applicable model to explain the end of the Roman empire.”

Ward-Perkins’ book is a work which sets out to show why he thinks those people are wrong, presenting along the way much evidence for widespread violence and disruption throughout the Western Empire towards the end. Despite the depressing topics covered therein I really enjoyed the book; Ward-Perkins spends a lot of time on material culture aspects and archaeological remains – it’s perhaps a telling fact that the book’s appendix deals with the properties of pottery and potsherds, and how important these kinds of material remains might be in terms of helping to make sense of things which happened in the far past. A general problem in a collapse setting is that when conditions deteriorate a lot, the sort of high-quality evidence that historians and archaeologists love to look at tends to disappear; censuses stop being taken (so you have to guess at how many people were around, instead of knowing it reasonably well – which can be particularly annoying if the disrupting factor was also killing people), innumeracy and illiteracy increase (translating to fewer written sources available), and so on. I should perhaps interpose that these sorts of issues do not just pertain to historical sources from the past; similar problems also arise in various analytical contexts today. Countries in a state of crisis (war, epidemics) tend to produce poor and questionable data, if any data can be gathered at all, a point I recall being covered in Newman & DeRouen’s book; related topics were also discussed in M’ikanatha & Iskander’s book, as people working in public health sometimes face these problems as well (that work was of course focused on disease surveillance aspects, and in that context I might mention that the authors noted that poor data availability does not necessarily mean that no data is ‘available’; for example in such settings (cheap) proxy data of various kinds may sometimes be usefully employed to inform resource allocation decisions, even if the use of such data would not be cost-effective or meaningful in a different setting). Another point of relevance is of course that some types of evidence survive the passage of time much better than others; pottery is much harder to destroy than low-quality parchment.

The point of looking at things like pottery and coins (a related topic I recall Webster covering in some detail in his book about The Roman Invasion of Britain) is not mainly that it’s super interesting to look at different types of pottery or coins – the point is that these types of material remains tend to be extremely informative about many things besides the artifacts themselves. Pottery was used for storing goods, and those goods aren’t around any longer but the pottery still is. And ‘pottery’ is not just ‘pottery’; different types of pottery required different levels of skill, and an important variable here is the level of standardization – Roman pottery was in general of high quality and was highly standardized; by examining e.g. clay content you can actually often tell where the pottery was made; specific producers produced pottery that was easily datable. Coins were used for purchasing things, and widespread use of them implies the existence of trading networks not relying on barter trade. Different coins had different values and there are important insights to be gathered from the properties of these artifacts; Joseph Tainter e.g. talks in his book about how the silver content of Roman coins gradually decreased over time, indicating that at some periods the empire was apparently undergoing quite severe inflation (the Roman military was compensated in coin, not goods, so by tweaking the amount of copper or silver in those coins the emperors could save a bit of money – which many of them did). If the amount of low-denomination coins drops a lot this might be an indication that people were reverting to barter trade. And so on. If you find some Roman coins in a field in Britain, it might mean that there used to be a Roman garrison there. If people used to use roof tiles and build buildings out of stone, rather than wood, and you observe that they stopped doing that, that’s also a clue that something changed.

A lot of the kind of evidence Ward-Perkins looks at in his book is to some extent indirect evidence, but the point is that there’s a lot of it, and if different sources tell roughly similar stories it sort of starts getting hard to argue against. To give a sense of the scale of the material remains available, one single source in Rome, Monte Testaccio, is made up entirely of broken oil amphorae imported to Rome from south-western Spain during the 2nd and 3rd centuries and is estimated to contain the remains of 53 million amphorae. An image of how the remains of one particular pottery manufacturer operating in Oxford in the 3rd and 4th centuries are distributed throughout Britain shows something like 100 different English sites where that pottery has been found. Again, the interesting thing here is not only the pottery itself, but also all the things people transported using those vessels, and all those other things (lost from the archaeological record) that might have been transported from A to B if they were willing to transport brittle pottery vessels that far around. And it’s very interesting to see distributions like that and then start comparing them with the sort of distributions you’ll get if you look for stuff produced, say, 200 years later. Coins, pottery, roof tiles, amphorae, animal bones (there’s evidence that Roman cows were larger than their Early Medieval counterparts), new construction (e.g. temples) – look at what people left behind, compare the evidence you get from the time of the Empire with what came after; this is a very big part of what Ward-Perkins does in his book.

While looking at the evidence it becomes obvious that some regions were more severely affected than others, and Ward-Perkins goes into those details as well. In general it seems that Britain was the most severely affected region, with other regions doing somewhat better; the timing also varied greatly. Greece (and much of the Eastern Empire) actually experienced a period of expansion (increased density of settlements, new churches and monasteries, stone rural houses) during the fifth century, but around 600 AD the Aegean was severely hit and experienced severe disruption where former great cities became little but abandoned ghost towns. Ward-Perkins also takes care to deal with the ‘barbarians’ in at least some detail (Peter Heather covers that stuff in a lot more detail in his book Empires and Barbarians, if people are curious to know more about these topics), not lumping them all together into One Great Alliance to Take Down the Empire (quite often these guys were at war with each other). The evidence is presented in some detail, which also means that if you walk away from the book still thinking Ward-Perkins hasn’t made a good case for his beliefs, well, you’ll at least know where the author is coming from and why he holds the views he does.

I’ve added some more quotes from the book below. If you’re interested in these topics this book is a must read.

“The Germanic invaders of the western empire seized or extorted through the threat of force the vast majority of the territories in which they settled, without any formal agreement on how to share resources with their new Roman subjects. The impression given by some recent historians that most Roman territory was formally ceded to them as part of a treaty is quite simply wrong. Whenever the evidence is moderately full, as it is from the Mediterranean provinces, conquest or surrender to the threat of force was definitely the norm, not peaceful settlement. […] The experience of conquest was, of course, very varied across the empire. Some regions were overrun brutally but swiftly. […] Other regions, particularly those near the frontiers of the empire, suffered much more prolonged violence. […] Even those few regions that eventually passed relatively peacefully into Germanic control had all previously experienced invasion and devastation.”

“Throughout the time that the Roman empire existed, the soldiery of many towns were maintained at public expense for the defence of the frontier. When this practice fell into abeyance, both these troops and the frontier disappeared. […] It has rightly been observed that the deposition in 476 of the last emperor resident in Italy, Romulus Augustulus, caused remarkably little stir: the great historian of Antiquity, Momigliano, called it the ‘noiseless fall of an empire’.39 But the principal reason why this event passed almost unnoticed was because contemporaries knew that the western empire, and with it autonomous Roman power, had already disappeared in all but name. […] The story of the loss of the West is not a story of great set-piece battles, like Hadrianopolis, heroically lost by the Romans in the field. […] The West was lost mainly through failure to engage the invading forces successfully and to drive them back. This caution in the face of the enemy, and the ultimate failure to drive him out, are best explained by the severe problems that there were in putting together armies large enough to feel confident of victory. Avoiding battle led to a slow attrition of the Roman position, but engaging the enemy on a large scale would have risked immediate disaster […] Roman military dominance over the Germanic peoples was considerable, but never absolute and unshakable. […] even at the best of times, the edge that the Romans enjoyed over their enemies, through their superior equipment and organization, was never remotely comparable, say, to that of Europeans in the nineteenth century […] although normally the Romans defeated barbarians when they met them in battle, they could and did occasionally suffer disasters.”

“Italy suffered from the presence of large hostile armies in 401-2 (Alaric and the Goths), in 405-6 (Radagaisus), and again from 408 to 412 (Alaric, for the second time); Gaul was devastated in the years 407-9 by the Vandals, Alans, and Sueves; and the Iberian peninsula by the same peoples, from 409. The only regions of the western empire that had not been profoundly affected by violence by 410 were Africa and the islands of the Mediterranean […] Radagaisus’ incursion was successfully crushed, but it was immediately followed by a disastrous sequence of events: the crossing of the Rhine by Vandals, Sueves, and Alans at the very end of 406; the usurpation of Constantine III in 407, taking with him the resources of Britain and much of Gaul; and the Goths’ return to Italy in 408. […] Some of the lost territories were temporarily recovered in the second decade of the century; but much (the whole of Britain and a large part of Gaul and Spain) was never regained, and even reconquered provinces took many years to get back to full health […] the imperial recovery was only short-lived; in 429 it was brought definitely to an end by the successful crossing of the Vandals into Africa, and the devastation of the western empire’s last remaining secure tax base. […] There was, of course, a close connection between failure ‘abroad’ and the usurpations and rebellions ‘at home’. […] As in other periods of history, failure against foreign enemies and civil war were very closely linked, indeed feeding off each other.”

“Some accounts of the invasions [and maps of them] […] seem to be describing successive campaigns in a single war, with the systematic and progressive seizure of territory by the various armies of a united German coalition. If this had really been the case, the West would almost certainly have fallen definitely in the very early fifth century, and far less of the structures of imperial times would have survived into the post-Roman period. The reality was very much more messy and confused […] The different groups of incomers were never united, and fought each other, sometimes bitterly, as often as they fought the ‘Romans’ – just as the Roman side often gave civil strife priority over warfare against the invaders.35 When looked at in detail, the ‘Germanic invasions’ of the fifth century break down into a complex mosaic of different groups, some imperial, some local, and some Germanic, each jockeying for position against or in alliance with the others, with the Germanic groups eventually coming out on top. [As already mentioned, Heather is the book to read if you’re interested in these topics – US] […] Because the military position of the imperial government in the fifth century was weak, and because the Germanic invaders could be appeased, the Romans on occasion made treaties with particular groups, formally granting them territory on which to settle in return for their alliance. […] The interests of the centre when settling Germanic peoples, and those of the locals who had to live with the arrangements, certainly did not always coincide. […] The imperial government was entirely capable of selling its provincial subjects downriver, in the interests of short-term political and military gain. […] Sidonius Apollinaris, bishop of Clermont and a leader of the resistance to the Visigoths, recorded his bitterness: ‘We have been enslaved, as the price of other people’s security.41‘”

“[A]rchaeological evidence now available […] shows a startling decline in western standards of living during the fifth to seventh centuries.1 […] Ceramic vessels, of different shapes and sizes, play an essential part in the storage, preparation, cooking, and consumption of foodstuffs. They certainly did so in Roman times […] amphorae, not barrels, were the normal containers for transport and domestic storage of liquids. […] Pots are low-value, high-bulk items, with the additional disadvantage of being brittle […] and they are difficult and expensive to pack and transport, being heavy, bulky, and easy to break. If, despite these disadvantages, vessels (both fine tableware and more functional items) were being made to a high standard and in large quantities, and if they were travelling widely and percolating through even the lower levels of society – as they were in the Roman period – then it is much more likely than not that other goods, whose distribution we cannot document with the same confidence, were doing the same. […] There is, for instance, no reason to suppose that the huge markets in clothing, footwear, and tools were less sophisticated than that in pottery. […] In the post-Roman West, almost all this material sophistication disappeared. Specialized production and all but the most local distribution became rare, unless for luxury goods; and the impressive range and quantity of high-quality functional goods, which had characterized the Roman period, vanished, or, at the very least, were drastically reduced. The middle and lower markets, which under the Romans had absorbed huge quantities of basic, but good-quality, items, seem to have almost entirely disappeared. […] There is no area of the post-Roman West that I know of where the range of pottery available in the sixth and seventh centuries matches that of the Roman period, and in most areas the decline in quality is startling. Furthermore, it was not only quality and diversity that declined; the overall quantities of pottery in circulation also fell dramatically. […] what had once been widely diffused products had become luxury items.”

“What we observe at the end of the Roman world is not a ‘recession’ […] with an essentially similar economy continuing to work at a reduced pace. Instead, what we see is a remarkable qualitative change, with the disappearance of entire industries and commercial networks. The economy of the post-Roman West is not that of the fourth century reduced in scale, but a very different and far less sophisticated entity.43 This is at its starkest and most obvious in Britain. A number of basic skills disappeared entirely during the fifth century, to be reintroduced only centuries later. […] All over Britain the art of making pottery on a wheel disappeared in the early fifth century, and was not reintroduced for almost 300 years. The potter’s wheel is not an instrument of cultural identity. Rather, it is a functional innovation that facilitates the rapid production of thin-walled ceramics; and yet it disappeared from Britain. […] post-Roman Britain in fact sank to a level of economic complexity well below that of the pre-Roman Iron Age. Southern Britain, in the years before the Roman conquest of AD 43, was importing quantities of Gaulish wine and Gaulish pottery; it had its own native pottery industries with regional distribution of their wares; it even had native silver coinages […] The settlement pattern of later iron-age Britain also reflects emerging economic complexity, with substantial coastal settlements […] which were at least partly dependent on trade. None of these features can be found reliably in fifth- and sixth-century post-Roman Britain. It is really only in about AD 700, three centuries after the disintegration of the Romano-British economy, that southern Britain crawled back to the level of economic complexity found in the pre-Roman Iron Age, with evidence of pots imported from the Continent, the first substantial and wheel-turned Anglo-Saxon pottery industry […], the striking of silver coins, and the emergence of coastal trading towns […] In the western Mediterranean, the economic regression was by no means as total as it was in Britain. […] But it must be remembered that in the Mediterranean world the level of economic complexity and sophistication reached in the Roman period was very considerably higher than anything ever attained in Britain. The fall in economic complexity may in fact have been as remarkable as that in Britain; but, since in the Mediterranean it started from a much higher point, it also bottomed out at a higher level. […] in some areas at least a very similar picture can be found to that sketched out above – of a regression, taking the economy way below levels of complexity reached in the pre-Roman period.”

“The enormity of the economic disintegration that occurred at the end of the empire was almost certainly a direct result of […] specialization. The post-Roman world reverted to levels of economic simplicity […] with little movement of goods, poor housing, and only the most basic manufactured items. The sophistication of the Roman period, by spreading high-quality goods widely in society, had destroyed the local skills and local networks that, in pre-Roman times, had provided lower-level economic complexity. It took centuries for people in the former empire to reacquire the skills and the regional networks that would take them back to these pre-Roman levels of sophistication. […] The Roman period is sometimes seen as enriching only the elite, rather than enhancing the standard of living of the population at large. […] I think this, and similar views, are mistaken. For me, what is most striking about the Roman economy is precisely the fact that it was not solely an elite phenomenon, but one that made basic good-quality items available right down the social scale. […] good-quality pottery was widely available, and in regions like Italy even the comfort of tiled roofs. I would also seriously question the romantic assumption that economic simplicity meant a freer or more equal society.”

“There was no single moment, nor even a single century of collapse. The ancient economy disappeared at different times and at varying speeds across the empire. […] It was […] the fifth-century invasions that […] brought down the ancient economy in the West. However, this does not mean that the death of the sophisticated ancient world was intended by the Germanic peoples. The invaders entered the empire with a wish to share in its high standard of living, not to destroy it […] But, although the Germanic peoples did not intend it, their invasions, the disruptions these caused, and the consequent dismembering of the Roman state were undoubtedly the principal cause of death of the Roman economy.”

“Reading and writing (and a grounding in classical literature) were in Roman times an essential mark of status. […] illiterates amongst the Roman upper classes were very rare indeed. […] In a much simpler world, the urgent need to read and write declined, and with it went the social pressure on the secular elite to be literate. Widespread literacy in the post-Roman West definitely became confined to the clergy. […] It is a striking fact, and a major contrast with Roman times, that even great rulers could be illiterate in the early Middle Ages.”

“The changing perspectives of scholarship are always shaped in part by wider developments in modern society. There is inevitably a close connection between the way we view our own world and the way we interpret the past. […] [T]here is a real danger for the present day in a vision of the past that explicitly sets out to eliminate all crisis and all decline. The end of the Roman West […] destroyed a complex civilization, throwing the inhabitants of the West back to a standard of living typical of prehistoric times. Romans before the fall were as certain as we are today that their world would continue for ever substantially unchanged. They were wrong.”

 

September 18, 2017 Posted by | Archaeology, Books, History

Quotes

i. “I’m opposed to any sport that reduces the coefficient of friction between me and the ground.” (Alan Kotok)

ii. “If God wanted us to believe in him, he’d exist.” (Linda Smith)

iii. “[A] man who contradicts himself may have succeeded in exercising his vocal chords. But from the point of view of imparting information, of communicating facts (or falsehoods) it is as if he had never opened his mouth. He utters words, but does not say anything.” (P. F. Strawson)

iv. “What is very important to me is two points: A theory should be internally consistent and it should have some contact with observation. Well, I’m told by all the experts that this theory [String theory] is internally consistent, although they think up new interpretations every time I turn my back. But contact with reality? Nobody’s given me anything. I just watch. I’m somewhat unhappy that so many people are working on it. To me, as a physicist, it’s sort of sad that so many people at the same time work at something that doesn’t seem to have any contact with experiment.” (Valentine Telegdi)

v. “By definition, the conventional wisdom of the day is widely accepted, continually reiterated and regarded not as ideology but as reality itself. Rebelling against “reality,” even when its limitations are clearly perceived, is always difficult. It means deciding things can be different and ought to be different; that your own perceptions are right and the experts and authorities wrong; that your discontent is legitimate and not merely evidence of selfishness, failure or refusal to grow up. […] rebels risk losing their jobs, failing in school, incurring the wrath of parents and spouses, suffering social ostracism. Often vociferous conservatism is sheer defensiveness: People are afraid to be suckers, […] to be branded bad or crazy.” (Ellen Willis)

vi. “If you want truth, you should begin by giving it.” (Lloyd Alexander)

vii. “All the greatest blessings are a source of anxiety, and at no time is fortune less wisely trusted than when it is best” (Seneca the Younger, On the shortness of life)

viii. “Killing oneself is, anyway, a misnomer. We don’t kill ourselves. We are simply defeated by the long, hard struggle to stay alive. When somebody dies after a long illness, people are apt to say, with a note of approval, “He fought so hard.” And they are inclined to think, about a suicide, that no fight was involved, that somebody simply gave up. This is quite wrong.” (Sally Brampton)

ix. “…to make the mistakes of youth is no crime, but not to learn from them is.” (Jim Butcher, Summer Knight)

x. “…a guest is a jewel on the cushion of hospitality” (Rex Stout, A right to die)

xi. ““Mr. Wolfe is in the middle of a fit. It’s complicated. There’s a fireplace in the front room, but it’s never lit because he hates open fires. He says they stultify mental processes. But it’s lit now because he’s using it. He’s seated in front of it, on a chair too small for him, tearing sheets out of a book and burning them. The book is the new edition, the third edition, of Webster’s New International Dictionary, Unabridged, published by the G. & C. Merriam Company of Springfield, Massachusetts. He considers it subversive because it threatens the integrity of the English language. In the past week he has given me a thousand examples of its crimes. He says it is a deliberate attempt to murder the— I beg your pardon. I describe the situation at length because he told me to bring you in there, and it will be bad. Even if he hears what you say, his mental processes are stultified. Could you come back later? After lunch he may be human.”
She was staring up at me. “He’s burning up a dictionary?”
“Right. That’s nothing. Once he burned up a cookbook because it said to remove the hide from a ham end before putting it in the pot with lima beans.” (Rex Stout, Gambit)

xii. “A friend in need is a friend to be avoided.” (David Gemmell)

xiii. “Virtually all ideologues, of any variety, are fearful and insecure, which is why they are drawn to ideologies that promise prefabricated answers for all circumstances.” (Jane Jacobs)

xiv. “To science, not even the bark of a tree or a drop of pond water is dull or a handful of dirt banal. They all arouse awe and wonder.” (-ll-)

xv. “Hydrogen is a light, odorless gas, which, given enough time, turns into people.” (Edward Robert Harrison)

xvi. “There is an obesity epidemic. One out of every three Americans… weighs as much as the other two.” (Richard Jeni)

xvii. “We learn, when we learn, only from experience, and then we only learn from our mistakes. Our successes only serve to reinforce our superstitions.” (Arthur Jones)

xviii. “How old am I? Old enough to know it’s impossible to change the thinking of fools, but young and foolish enough to keep on trying.” (-ll-)

xix. “There is no greater impotence in all the world like knowing you are right and that the wave of the world is wrong, yet the wave crashes upon you.” (Norman Mailer)

xx. “We never have any understanding of any subject matter except in terms of our own mental constructs of “things” and “happenings” of that subject matter.” (Douglas T. Ross)

September 14, 2017 Posted by | Books, Quotes/aphorisms | Leave a comment

Sound

I gave the book two stars. As I was writing this post I was actually reconsidering, thinking about whether that was too harsh, whether the book deserved a third star. When I started out reading it I was assuming it would be a ‘physics book’ (I found it via browsing a list of physics books, so…), but that quickly turned out to be a mistaken assumption. There’s stuff about wave mechanics in there, sure, but this book also includes stuff about anatomy (a semi-detailed coverage of how the ear works), how musical instruments work, how bats use echolocation to find insects, and how animals who live underwater hear differently from the way we hear things. This book is really ‘all over the place’, which was probably part of why I didn’t like it as much as I might otherwise have. Lots of interesting stuff included, though – I learned quite a bit from this book.

I’ve added some quotes from the book below, and below the quotes I’ve added some links to stuff/concepts/etc. covered in the book.

“Decibels aren’t units — they are ratios […] To describe the sound of a device in decibels, it is vital to know what you are comparing it with. For airborne sound, the comparison is with a sound that is just hearable (corresponding to a pressure of twenty micropascals). […] Ultrasound engineers don’t care how much ‘louder than you can just about hear’ their ultrasound is, because no one can hear it in the first place. It’s power they like, and it’s watts they measure it in. […] Few of us care how much sound an object produces — what we want to know is how loud it will sound. And that depends on how far away the thing is. This may seem obvious, but it means that we can’t ever say that the SPL [sound pressure level] of a car horn is 90 dB, only that it has that value at some stated distance.”
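To make the decibel arithmetic concrete, here is a small Python sketch of my own (an illustration, not something from the book) converting a sound pressure into an SPL relative to the 20 micropascal reference mentioned above:

import math

P_REF = 20e-6  # reference pressure in pascals; the 'just hearable' sound mentioned above

def spl_db(pressure_pa):
    # Sound pressure level in dB relative to the 20 micropascal reference
    return 20 * math.log10(pressure_pa / P_REF)

print(spl_db(20e-6))  # 0.0  -> the reference pressure itself sits at 0 dB, the threshold of hearing
print(spl_db(0.02))   # 60.0 -> a pressure a thousand times the reference is 60 dB

The point about distance of course still applies: the same source produces a different pressure, and hence a different SPL, depending on how far away it is measured.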

“For an echo to be an echo, it must be heard more than about 1/20 of a second after the sound itself. If heard before that, the ear responds as if to a single, louder, sound. Thus 1/20 second is the auditory equivalent to the 1/5 of a second that our eyes need to see a changing thing as two separate images. […] Since airborne sounds travel about 10 metres in 1/20 second, rooms larger than this (in any dimension) are echo chambers waiting to happen.”
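A quick back-of-the-envelope check of those numbers, assuming a speed of sound of roughly 343 m/s (my assumption; the ~10 metre figure quoted above is easiest to make sense of as the one-way distance to the reflecting surface, since the sound has to get there and back within the 1/20 second):

SPEED_OF_SOUND = 343.0  # m/s in air at around 20 degrees C (assumed value)
ECHO_DELAY = 1 / 20     # seconds needed before a reflection is heard as a separate echo

print(SPEED_OF_SOUND * ECHO_DELAY)      # ~17 m of travel in 1/20 of a second
print(SPEED_OF_SOUND * ECHO_DELAY / 2)  # ~8.6 m: the round trip to the wall and back halves the permissible distance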

“Being able to hear is unremarkable: powerful sounds shake the body and can be detected even by single-celled organisms. But being able to hear as well as we do is little short of miraculous: we can quite easily detect a sound which delivers a power of 10^−15 watts to the eardrums, despite the fact that it moves them only a fraction of the width of a hydrogen atom. Almost as impressive is the range of sound powers we can hear. The gap between the quietest audible sound level (the threshold of hearing, 0 dB) to the threshold of pain (around 130 dB) is huge: 130 dB is 10^13 […] We can also hear a fairly wide range of frequencies; about ten octaves, a couple more than a piano keyboard. […] Our judgement of directionality, by contrast, is mediocre; even in favourable conditions we can only determine the direction of a sound’s source within about 10° horizontally or 20° vertically; many other animals can do very much better. […] Perhaps the most impressive of all our hearing abilities is that we can understand words whose levels are less than 10 per cent of that of background noise level (if that background is a broad spread of frequencies): this far surpasses any machine.”
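Converting that 130 dB span back into a ratio of sound powers is a one-liner; a minimal sketch of mine, not the book's:

def db_to_power_ratio(level_difference_db):
    # A level difference in decibels corresponds to a power/intensity ratio of 10^(dB/10)
    return 10 ** (level_difference_db / 10)

print(db_to_power_ratio(130))  # 1e13: the threshold of pain carries ten trillion times the power of the quietest audible sound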

“The nerve signals that emerge from the basilar membrane are not mimics of sound waves, but coded messages which contain three pieces of information: (a) how many nerve fibres are signalling at once, (b) how far along the basilar membrane those fibres are, and (c) how long the interval is between bursts of fibre signals. The brain extracts loudness information from a combination of (a) and (c), and pitch information from (b) and (c). […] The hearing system is a delicate one, and severe damage to the eardrums or ossicles is not uncommon. […] This condition is called conductive hearing loss. If damage to the inner ear or auditory nerve occurs, the result is sensorineural or ‘nerve’ hearing loss. It mostly affects higher frequencies and quieter sounds; in mild forms, it gives rise to a condition called recruitment, in which there is a sudden jump in the ‘hearability’ of sounds. A person suffering from recruitment and exposed to a sound of gradually increasing level can at first detect nothing and then suddenly hears the sound, which seems particularly loud. Hence the ‘there’s no need to shout’ protest in response to those who raise their voices just a little to make themselves heard on a second attempt. Sensorineural hearing loss is the commonest type, and its commonest cause is physical damage inflicted on the hair cells. […] About 360 million people worldwide (over 5 per cent of the global population) have ‘disabling’ hearing loss — that is, hearing loss greater than 40 dB in the better-hearing ear in adults and a hearing loss greater than 30 dB in the better-hearing ear in children […]. About one in three people over the age of sixty-five suffer from such hearing loss. […] [E]veryone’s ability to hear high-frequency sounds declines with age: newborn, we can hear up to 20 kHz, by the age of about forty this has fallen to around 16 kHz, and to 10 kHz by age sixty. Aged eighty, most of us are deaf to sounds above 8 kHz. The effect is called presbyacusis”.

“The acoustic reflex is one cause of temporary threshold shift (TTS), in which sounds which are usually quiet become inaudible. Unfortunately, the time the reflex takes to work […] is usually around 45 milliseconds, which is far longer than it takes an impulse sound, like a gunshot or explosion, to do considerable damage. […] Where the overburdened ear differs from other abused measuring instruments (biological and technological) is that it is not only the SPL of noise that matters: energy counts too. A noise at a level which would cause no more than irritation if listened to for a second can lead to significant hearing loss if it continues for an hour. The amount of TTS is proportional to the logarithm of the time for which the noise has been present — that is, doubling the exposure time more than doubles the amount. […] The amount of TTS reduces considerably if there is a pause in the noise, so if exposure to noise for long periods is unavoidable […], there is very significant benefit in removing oneself from the noisy area, if only for fifteen minutes.”

“Many highly effective technological solutions to noise have been developed. […] The first principle of noise control is to identify the source and remove it. […] Having dealt as far as possible with the noise source, the next step is to contain it. […] When noise can be neither avoided nor contained, the next step is to keep its sources well separated from potential sufferers. One approach, used for thousands of years, is zoning: legislating for the restriction of noisy activities to particular areas, such as industrial zones, which are distant from residential districts. […] Where zone separation by distance is impracticable […], sound barriers are the main solution: a barrier that just cuts off the sight of a noise source will reduce the noise level by about 5 dB, and each additional metre will provide about an extra 1.5 dB reduction. […] Since barriers largely reflect rather than absorb, reflected sounds need consideration, but otherwise design and construction are simple, results are predictable, and costs are relatively low.”
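The barrier rule of thumb quoted above can be written down as a tiny function; this is just my restatement of the rule (about 5 dB for cutting off sight of the source, plus roughly 1.5 dB per additional metre of barrier height), not a formula taken from the book:

def barrier_attenuation_db(metres_above_sight_line):
    # ~5 dB for a barrier that just blocks the line of sight, plus ~1.5 dB per extra metre
    return 5.0 + 1.5 * metres_above_sight_line

print(barrier_attenuation_db(0))  # 5.0 dB: barrier just blocks the view of the source
print(barrier_attenuation_db(2))  # 8.0 dB: barrier extends two metres above the sight line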

“[T]he basic approaches to home sound reduction are simple: stop noise entering, destroy what does get in, and don’t add more to it yourself. There are three ways for sound to enter: via openings; by structure-borne vibration; and through walls, windows, doors, ceilings, and floors acting as diaphragms. In all three cases, the main point to bear in mind is that an acoustic shell is only as good as its weakest part: just as even a small hole in an otherwise watertight ship’s hull renders the rest useless, so does a single open window in a double-glazed house. In fact, the situation with noise is much worse than with water due to the logarithmic response of our ears: if we seal one of two identical holes in a boat we will halve the inflow. If we close one of two identical windows into a house, […] that 50 per cent reduction in acoustic intensity is only about a 2 per cent reduction in loudness. The second way to keep noise out is double glazing, since single-glazed windows make excellent diaphragms. Structure-borne sound is a much greater challenge […] One inexpensive, adaptable, and effective solution […] is the hanging of heavy velour drapes, with as many folds as possible. If something more drastic is required, it is vital to involve an expert: while an obvious solution is to thicken walls, it’s important to bear in mind that doubling thickness increases transmission loss by only 6 dB (a sound power reduction of about three-quarters, but a loudness reduction of only about 40 per cent). This means that solid walls need to be very thick to work well. A far better approach is the use of porous absorbers and of multi-layer constructions. In a porous absorber like glass fibre, higher-frequency sound waves are lost through multiple reflections from the many internal surfaces. […] A well-fitted acoustically insulated door is also vital. The floor should not be neglected: even if there are no rooms beneath, hard floors are excellent both at generating noise when walked on and in transmitting that noise throughout the building. Carpet and underlay are highly effective at high frequencies but are almost useless at lower ones […] again there is no real alternative to bringing in an expert.”
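The decibel-to-loudness conversions in the passage above can be reproduced with the common rule of thumb that perceived loudness roughly halves for every 10 dB drop; the rule and the little calculation below are my own illustration, not the author's:

def power_reduction(db_drop):
    # Fraction of transmitted sound power removed by a drop of db_drop decibels
    return 1 - 10 ** (-db_drop / 10)

def loudness_reduction(db_drop):
    # Approximate fraction of perceived loudness removed, assuming loudness halves per 10 dB (rule of thumb, not exact)
    return 1 - 2 ** (-db_drop / 10)

print(power_reduction(6))     # ~0.75: doubling a wall's thickness (+6 dB) removes about three-quarters of the power
print(loudness_reduction(6))  # ~0.34: but only about a third of the perceived loudness, in line with the ~40% figure above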

“There are two reasons for the apparent silence of the sea: one physical, the other biological. The physical one is the impedance mismatch between air and water, in consequence of which the surface acts as an acoustic mirror, reflecting back almost all sound from below, so that land-dwellers hear no more than the breaking of the waves. […] underwater, the eardrum has water on one side and air on the other, and so impedance mismatching once more prevents most sound from entering. If we had no eardrums (nor air-filled middle ears) we would probably hear very well underwater. Underwater animals don’t need such complicated ears as ours: since the water around them is a similar density to their flesh, sound enters and passes through their whole bodies easily […] because the velocity of sound is about five times greater in water than in air, the wavelength corresponding to a particular frequency is also about five times greater than its airborne equivalent, so directionality is harder to come by.”
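A quick illustration of the wavelength point, assuming a speed of sound of roughly 343 m/s in air and 1,500 m/s in seawater (my assumed values):

def wavelength_m(speed_m_s, frequency_hz):
    # Wavelength = propagation speed / frequency
    return speed_m_s / frequency_hz

print(wavelength_m(343, 1000))   # ~0.34 m for a 1 kHz tone in air
print(wavelength_m(1500, 1000))  # ~1.5 m for the same tone in seawater, i.e. four to five times longer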

“Although there is little that electromagnetic radiation does above water that sound cannot do below it, sound has one unavoidable disadvantage: its velocity in water is much lower than that of electromagnetic radiation in air […]. Also, when waves are used to send data, the rate of that data transmission is directly proportional to the wave frequency — and audio sound waves are around 1,000 times lower in frequency than radio waves. For this reason ultrasound is used instead, since its frequencies can match those of radio waves. Another advantage is that it is easier to produce directional beams at ultrasonic frequencies to send the signal in only the direction you want. […] The distances over which sound can travel underwater are amazing. […] sound waves are absorbed far less in water than in air. At 1 kHz, absorption is about 5 dB/ km in air (at 30 per cent humidity) but only 0.06 dB/ km in seawater. Also, underwater sound waves are much more confined; a noise made in mid-air spreads in all directions, but in the sea the bed and the surface limit vertical spreading. […] The range of sound velocities underwater is [also] far larger than in air, because of the enormous variations in density, which is affected by temperature, pressure, and salinity […] somewhere under all oceans there is a layer at which sound velocity is low, sandwiched between regions in which it is higher. By refraction, sound waves from both above and below are diverted towards the region of minimum sound velocity, and are trapped there. This is the deep sound channel, a thin spherical shell extending through the world’s oceans. Since sound waves in the deep sound channel can move only horizontally, their intensity falls in proportion only to the distance they travel, rather than to the square of the distance, as they would in air or in water at a single temperature (in other words, they spread out in circles, not spheres). Sound absorption in the deep sound channel is very low […] and sound waves in the deep channel can readily circumnavigate the Earth.”
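The difference between ordinary spherical spreading and the confined spreading in the deep sound channel can be made concrete with a small sketch (mine, with absorption ignored): intensity falls as 1/r² when a wave spreads over ever larger spheres, but only as 1/r when it is trapped in a layer and spreads over circles.

import math

def spreading_loss_db(distance_m, reference_m=1.0, spherical=True):
    # Transmission loss from geometric spreading alone:
    # 20*log10(r) for spherical spreading, 10*log10(r) for cylindrical (deep-channel) spreading
    factor = 20 if spherical else 10
    return factor * math.log10(distance_m / reference_m)

# Extra loss incurred going from 1 km out to 1,000 km:
print(spreading_loss_db(1e6) - spreading_loss_db(1e3))                                    # 60 dB with spherical spreading
print(spreading_loss_db(1e6, spherical=False) - spreading_loss_db(1e3, spherical=False))  # only 30 dB in the deep sound channel

Combined with the very low absorption mentioned in the quote, this is why deep-channel sound can travel such extreme distances.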

Links:

Sound.
Neuromast.
Monochord.
Echo.
Pierre-Simon Laplace.
Sonar.
Foley.
Long Range Acoustic Device.
Physics of sound.
Speed of sound.
Shock wave.
Doppler effect.
Acoustic mirror.
Acoustic impedance.
Snell’s law.
Diffraction grating.
Interference (wave propagation).
Acousto-optic effect.
Sound pressure.
Sound intensity.
Square-cube law.
Decibel.
Ultrasound.
Sound level meter.
Phon.
Standing wave.
Harmonic.
Resonance.
Helmholtz resonance.
Phonautograph.
Spectrogram.
Fourier series/Fourier transform/Fast Fourier transform.
Equalization (audio).
Absolute pitch.
Consonance and dissonance.
Pentatonic scale.
Major and minor.
Polyphony.
Rhythm.
Pitched percussion instrument/Unpitched percussion instrument.
Hearing.
Ear/pinna/tympanic membrane/Eustachian tube/Middle ear/Inner ear/Cochlea/Organ of Corti.
Otoacoustic emission.
Broca’s area/primary auditory cortex/Wernicke’s area/Haas effect.
Conductive hearing loss/Sensorineural hearing loss.
Microphone/Carbon microphone/Electret microphone/Ribbon microphone.
Piezoelectric effect.
Loudspeaker.
Missing fundamental.
Huffman coding.
Animal echolocation.
Phonon.
Infrasound.
Hydrophone.
Deep sound channel.
Tonpilz.
Stokes’ law of sound attenuation.
Noise.
Acoustic reflex.
Temporary threshold shift.
Active noise cancellation.
Sabine equation.

September 14, 2017 Posted by | Books, Physics | Leave a comment

Words

Most of the words below are words which I encountered while reading the Rex Stout books: Too Many Clients, The Final Deduction, Homicide Trinity, Gambit, The Mother Hunt, Trio for Blunt Instruments, A Right to Die, The Doorbell Rang, Death of a Doxy, The Father Hunt, Death of a Dude, Please Pass the Guilt, A Family Affair, Death Times Three, and Red Threads.

Commissure. Nonfeasance. Bodice. Binnacle. Episiotomy. Amplexus. Bayou. Jetty. Crisper. Conurbation. Splotch. Tarradiddle. Lamia. Prink/primp. Thaumaturgy/thaumaturge. Allspice. Panjandrum. Subdulous. Overweening. Perspicacity.

Jejune. Hamper. Cloche. Ulster. Bevel. Auto-da-fé. Buckram. Peccant. Fatuity. Dissension. Chipper. Analeptic. Cluck. Moll. Posy. Peeve. Wrangle. Chervil. Wile. Vim.

Huffy. Callow. Crabby. Locution. Scrapple. Jamb. Cockatrice. Wink. Spatter. Sororicide. Discomfiture. Diphthong. Twaddle. Rassle. Headcheese. Flimflam. Brioche. Doxy. Mooch. Incumbency.

Cogitable. Punctilio. Mantic. Frowzy. Burgoo. Boodle. Toplofty. Ell. Slue. Fulcrum. Piffle. Amphigoric. Subreption. Cynosure. Concupiscence. Carceral. Descant. Pretermit. Hickory. Ingénue.

September 13, 2017 Posted by | Books, Language | Leave a comment

Gastrointestinal Function in Diabetes Mellitus (III)

Below are some observations from chapters 5 and 6.

“The major functions of the small intestine are to digest and absorb nutrients, while those of the large bowel are to extract water and process faeces before expulsion. Diabetes mellitus may be associated with both small intestinal and colonic dysfunction, potentially resulting in a wide range of clinical manifestations, including gastrointestinal symptoms, poor nutritional status and impaired glycaemic control. […] The prevalence of small intestinal and colonic dysfunction in diabetes has not been formally evaluated and remains uncertain. However, small intestinal motor abnormalities are evident in about 80% of patients with diabetic gastroparesis, suggesting that the prevalence of intestinal dysmotility is likely to be comparable to the prevalence of gastroparesis in diabetic patients, i.e. 30–50% of unselected patients [1–6]. […] symptoms resulting from intestinal dysfunction are not cause-specific and are heterogeneous, potentially giving rise to diverse complaints, including anorexia, nausea, vomiting, constipation, diarrhoea and abdominal pain or discomfort. […] Transport of chyme through the small intestine is closely linked to intraluminal digestion and absorption of nutrients. The efficacy of absorption of nutrients is, therefore, potentially affected by dysmotility of the small intestine observed in diabetes, and by alterations in the transport mechanisms facilitating nutrient uptake across the intestinal membrane.”

“After meal ingestion, food is initially stored in the proximal stomach, then triturated in the distal stomach, and finally transported to the small intestine […]. The major functions of the small intestine are to mix and propel food particles in order to optimise intraluminal digestion and absorption. Those food particles that escape absorption, as well as indigestible solids, are transported to the colon, where water is extracted and faeces processed before expulsion. The motility patterns of the small intestine and colon are designed to efficiently serve these functions of controlled mixing and transport. When the small intestine is not exposed to nutrients, it exhibits a cyclic pattern of motility […] termed the migrating motor complex (MMC). […] The major function of the colon is to absorb water and electrolytes in order to concentrate and solidify the intraluminal content. Colonic motility plays an important role in these processes. In contrast to small intestinal motility, colonic motility follows a diurnal rhythm, with relative motor quiescence during sleep [55,56]. […] Transit and absorption of intestinal contents are regulated by the autonomic and enteric nervous systems. […] Numerous neuropeptides have been shown to play an important role in controlling the smooth muscle function of the small intestine and colon […] studies using experimental animal models of diabetes have shown altered activity of many neurotransmitters known to be of importance in preserving the integrity of intestinal motility […] Recently, the so-called interstitial cells of Cajal have been identified in the gastrointestinal tract [64–66] and appear to be responsible for the generation of the slow wave activity present in the entire gastrointestinal tract. […] The interplay between the enteric nervous system and the interstitial cells of Cajal is essential for normal gut motility.”

“[N]europathy of the autonomic (vagal and sympathetic) and enteric nerves may result in intestinal dysmotility. Autonomic neuropathy at the level of the gut can be assessed using cardiac autonomic nerve (CAN) function tests as a surrogate marker […] at present CAN function tests are the best tests available in the clinical situation. Studies using CAN function tests to assess involvement of the autonomic nerve system indicate that in patients with CAN the prevalence and severity of dysmotility of the small intestine and colon is substantially greater when compared to patients with normal CAN function. […] there is evidence that intestinal secretion may be abnormal in diabetes, due to increased secretion of fluids in response to a meal, rather than an increased basal secretory state [176]. […] These observations suggest that progressive neuropathy of the enteric and autonomic nervous system is likely to be responsible for the impaired intestinal secretion, rather than hyperglycaemia.”

“Studies that have investigated small intestinal motility in diabetes mellitus have revealed a wide spectrum of motor patterns, ranging from normal to grossly abnormal motility […] Postprandial small intestinal motor abnormalities include early recurrence of phase III and burst activity […] Both […] are thought to indicate neuropathic changes in either the intrinsic or extrinsic innervation of the gut. […] The data relating to colonic function in patients with diabetes mellitus are even more limited than those that exist for the small intestine […] [Some results suggest that] symptoms may not be a good indicator of the presence or absence of delayed colonic transit in diabetic patients.”

“There is little or no evidence that diabetes per se affects protein absorption to a clinically relevant extent. However, when diabetes mellitus is associated with severe pancreatic insufficiency […], coeliac disease […] or bacterial overgrowth, malabsorption of protein may occur. […] Since lipid absorption is dependent on the interplay of several organs (small intestine, pancreas, liver, gall bladder), diabetes mellitus has the potential to be associated with fat malabsorption […] Although it is not known whether small intestinal dysmotility per se can lead to fat malabsorption, it certainly can when the dysmotility is associated with bacterial overgrowth [160,161]. […] Recently, drug-induced malabsorption of fat has become a treatment option in diabetes mellitus. The inhibition of pancreatic lipase activity by orlistat prevents the hydrolysis of triglycerides, resulting in fat malabsorption. This approach has been reported to improve glycaemic control in type 2 diabetes”.

“The superior and inferior mesenteric arteries supply blood to the small and large intestine, while the superior, middle and inferior rectal arteries provide the arterial blood supply of the rectum. About 25% of the cardiac output in the fasting state circulates through the splanchnic arteries […] Animal models of diabetes are associated with abnormalities of neurotransmitters in the mesenteric veins and arteries […] Human diabetes may be associated with abnormalities in mesenteric blood flow. In diabetic patients with autonomic neuropathy, preprandial superior mesenteric arterial blood flow is greater than that in both control subjects and patients without autonomic neuropathy […] patients with autonomic dysfunction […] are at particular risk of postprandial hypotension and often exhibit a marked fall in systemic blood pressure after a meal […] the magnitude of the postprandial fall in blood pressure is dependent on meal composition (glucose has the greatest effect) and the rate of nutrient entry into the small intestine [196]. […] Patients with diabetes mellitus also frequently report symptoms attributable to orthostatic hypotension. A large survey of type 1 diabetes mellitus reported that the frequency of feeling faint on standing was 18% [200]. Symptomatic orthostatic hypotension in diabetic patients has been shown to be related to cardiovascular autonomic neuropathy”.

“Disordered defaecation, characterised by incontinence, constipation and diarrhoea, occurs frequently in patients with diabetes mellitus [1–3] but is often overlooked as a cause of morbidity. For example, in a study of 136 unselected diabetic outpatients referred to a tertiary centre, Feldman and Schiller found that constipation occurred in 60%, diarrhoea in 22% and faecal incontinence in 20% of their patients [1]. […] Disordered defaecation appears to be less common among patients with diabetes attending secondary referral centres [4,5], where constipation has been reported in about 20% and faecal incontinence in about 9% [5].”

“[D]efaecation and the preservation of continence are both complex territorial behaviours in humans. They are generated in the cerebral cortex and are […] markedly influenced by psychosocial factors. The multiple physiological functions required to control the passage of faeces are under the influence of a control centre in the pontine brain stem and orchestrated by the neuronal activity in the terminal expansion of the spinal cord. The instructions are conveyed via pelvic parasympathetic nerves, lumbar sympathetic nerves and sacral somatic nerves, influencing the function of the enteric nervous system and visceral smooth muscle and also the muscles of the pelvic floor. […] the muscles of the colon, abdominal wall and pelvic floor must be able to contract with sufficient power to propel faeces or resist that propulsion. But more important, the arrival of faeces in the rectum or even quite small increases in intra-abdominal pressure need to be detected immediately, so that appropriate responses can be rapidly triggered through spinal and enteric reflexes. These actions can be influenced at many levels by the diabetic process. […] Impairment of neural function caused by diabetic microangiopathy can affect to a lesser or greater extent all the mechanisms involved in the maintenance of faecal continence. So whether a person develops faecal incontinence or not depends on the interplay between all of these. Physiological studies have demonstrated that cohorts of patients with long-standing diabetes have an abnormally low anal tone, weak squeeze pressures and impaired rectal sensation [58–60]. […] Patients with long-standing diabetes mellitus are more likely to be afflicted by the shame of nocturnal incontinence of faeces than non-diabetics with faecal incontinence. […] Faecal incontinence in diabetic patients is also often associated with urinary incontinence [63]. […] Patients with faecal incontinence may only rarely be ‘cured’ — the major aim of treatment is to improve symptoms to a level where there is minimal impact on lifestyle.”

“It is important to recognise that the most common factor responsible for pudendal neuropathy in women is […] damage to the pelvic floor sustained during childbirth. […] Endo-anal ultrasonography has shown that 35% of primiparous women tested after delivery had sustained sphincter damage that persisted for at least 6 months [66]. The percentages are higher in those who had undergone forceps delivery and for multiparous women […] Diabetic women, especially those with less than optimal diabetic control, are more liable to suffer from obstetric complications, such as traumatic disruption of the anal sphincter or weakness of the pelvic floor, leading to chronic stretching of the pudendal nerve. This is because diabetics tend to give birth to large babies when glycaemic control is poor, and are more likely to experience long and difficult labours and require assisted delivery with forceps or ventouse [67].”

September 10, 2017 Posted by | Books, Diabetes, Gastroenterology, Neurology | Leave a comment

First Farmers (I?)

http://www.smbc-comics.com/comics/1504452864-20170903%20(1).png

(link)

This year I have so far read 113 books and I have added 5 of those books to my list of favourite books on goodreads. I have mentioned Herriot here on the blog even though it's not really the type of book I usually blog, and I blogged Yashin et al.’s biodemography text in a decent amount of detail. I have posted a couple of posts about Horowitz and Samsom’s book and I intend to blog that book in more detail later this week. However, there are a few great non-fiction books which I’ve read this year which I have not yet blogged at all, including Boyd and Richerson and the excellent book by Bellwood to which the title of this post refers. These books have one thing in common: They are ‘paper books’, not books stored in an electronic format, which means that blogging them takes more time than is ideal. The extra time it takes to blog paper books makes it hard for me to justify blogging such books in general, even books I think are great.

Aside from the time expenditure there are at least two other major problems I have with justifying blogging such books. One problem is that this blog is not really the proper place for me to recommend books to others, a state of affairs of which I am well aware. I sometimes do it anyway, but I know perfectly well that very few people will ever know or care that I liked a particular book if I write about that book here. If I actually wanted others to know about books like these there would be lots of other channels of communication much better suited for such purposes, such as the comment sections of large blogs/reddit threads/etc. To a first approximation nobody reads this blog, which is the way I like it. The other major problem – in the context of me justifying to myself blogging such books – is that I usually spend quite a bit of effort while reading such (paper) books, e.g. in the form of note taking and highlighting. A major reason I have for blogging non-fiction books is that blogging a book means that the content therein gets processed a few extra times, which aids recall and understanding. This incidentally goes both for the stuff that eventually finds its way into these posts, and to some extent also for the content that does not. When I’m reading paper books I tend to do a lot of this work while actively reading the books. Part of the reason is precisely that I know from experience that these kinds of books are bothersome to blog; if I know beforehand that I’m not particularly likely to blog a book I’ll usually spend a bit more time and effort while reading it. That extra amount of work of course makes me even less likely to end up blogging the book eventually; at some point diminishing marginal returns really kick in.

One take-away from all of the above is, I guess, that if you’re one of those three-four(?) people who semi-regularly read my blog and you also happen to actually care about which books I like and recommend, you should keep in mind that some of the really great books I read may end up not being covered here in ‘classical book posts’, simply because blogging great books may sometimes be too much work to justify the effort; those books you may spot quite easily by having an occasional look at my book collection posts (see the sidebar) or my goodreads favourites.

What made me decide to finally write this post was that I had been considering whether or not to write a post about Tainter’s The Collapse of Complex Societies, which I didn’t really like all that much. While thinking about this stuff I realized that it would frankly be madness for me to cover that book (also a paper book) here before I’d at least talked a bit about Boyd and Richerson and Bellwood’s books, as those books are just much better and more interesting. And then I concluded that I really ought to cover Bellwood …and here we are.

I’ve read about some of the topics Bellwood covers elsewhere, e.g. here, here, and here, but those other works did not treat these topics in anywhere near the same amount of detail (if at all); one of the reasons I really enjoyed Bellwood’s book is that it addresses in great detail precisely some of the questions I’d been asking myself while reading other works on related topics. The book covers things I had been looking for elsewhere but hadn’t been able to find. This admittedly mainly relates to the agriculture and archaeology parts rather than the linguistics part, but the linguistics is interesting as well.

If you’re interested in the origins of agriculture, this book is a must-read.

Below I’ve added some quotes from the book, as well as a few comments.

“This book suggests that major episodes of human movement occurred from time to time, in various parts of the world, as different populations developed or adopted agriculture and then spread farming, languages, and genes, in some cases across vast distances. […] In order to approach what often appears to be a debate in which specialists all talk past each other, concerned only with data from their own discipline, this book is framed around a fairly simple multidisciplinary hypothesis. The early farming dispersal hypothesis postulates that the spreads of early farming lifestyles were often correlated with prehistoric episodes of human population and language dispersal from agricultural homelands. The present-day distribution of language families and racially varied populations across the globe, allowing for the known reassortments that have ensued in historical times, still reflect to a high degree those early dispersals. […] [However] the early farming dispersal hypothesis is not claiming that only farmers ever dispersed into new lands or established language families in prehistory. Hunter-gatherers feature widely in this book since their lifestyle, in terms of long-term stability and reliability, has been the most successful in human history. It fueled the initial human colonization of the whole world, apart from a number of oceanic islands.”

“We have clear signs of relatively independent agricultural origins in western Asia, central China, the New Guinea highlands, Mesoamerica, the central Andes, the Mississippi basin, and possibly western Africa and southern India. These developments occurred at many different times between about 12,000 and 4,000 years ago. The agricultural systems concerned spread at remarkably different rates – some quickly, some slowly, some hardly at all.”

“This book owes its origin to a consideration of two primary observations: 1. Prior to the era of European colonization there existed (and still exist) a number of very widespread families of languages, the term “family” in this sense meaning that the languages concerned share common ancestry, having diverged from a common forebear […]. These language families exist because they have spread in some way from homeland regions, not because they have converged in place out of hundreds of formerly unrelated languages. 2. Within the early agricultural past of mankind there have existed many widespread archaeological complexes of closely linked artifactual style, shared economic basis, and relatively short-lived temporal placement. […] Again, these spreads have occurred from homeland regions, and most such complexes tend to become younger as one moves away from regions of agricultural origin […]. Most importantly, many agricultural homelands overlap geographically with major language family homelands, in highly significant ways.”

“The expansions of early farming populations that form the subject matter of this book reflect two consecutive processes: 1. the periodic genesis of new cultural (archaeological) or linguistic configurations in homeland circumstances; 2. the dispersal of such configurations into surrounding regions […] The transformations within such configurations, both during and after dispersal, can occur via adaptive or chance modifications to the inherited pattern (thus giving relationships of descent, or phylogeny), or via interactions with other contemporary human populations, including culturally and linguistically related as well as unrelated groups (thus giving rise to a process termed reticulation). […] One of the suggestions that will dominate the chapters in this book is that short bursts, or “punctuations,” of dispersal by closely related populations over very large areas have occurred from time to time in human prehistory, especially following the regional beginnings of agriculture or the acquisitions of some other material, demographic, or ideological advantages. Punctuations also occurred when humans first entered regions previously uninhabited, such as Australia, the Americas, and the Pacific Islands. These bursts have actually occupied very little of the total time span of human history. Often their effects are confusingly hidden beneath the reticulate interactive networks that have linked varied populations through the long millennia of subsequent history. But their underlying impact on the course of human history and on the generation of subsequent patterns of human diversity has been immense.”

“Many hunters and gatherers of the ethnographic record have resource management skills that can mimic agriculture, and some have even adopted minor forms of casual cultivation. […] Resource management […] can be defined as any technique that propagates, tends, or protects a species, reduces competition, prolongs or increases the harvest, insures the appearance of a species at a particular time in a particular place, extends the range of or otherwise modifies the nature, distribution, and density of a species […]. Resource management is not synonymous with agriculture or cultivation and it has obviously been practiced to some degree by all plant and animal exploiters since long before agriculture began. Cultivation, an essential component of any agricultural system, defines a sequence of human activity whereby crops are planted […], protected, harvested, then deliberately sown again […] Domesticated plants […] are those that show recognizable indications of morphological change away from the wild phenotype, attributable to human interference in the genotype through cultivation […] For animals, the concept of domestication is invoked when there are relatively undisputed signs of human control and breeding of a species. Such signs can normally be claimed in situations where animals were transported out of their homeland regions […] In putative homeland areas for such animals, especially where there was exploitation of wild ancestral species in pre-agricultural times, it can often be difficult to distinguish animal husbandry from hunting in early agricultural contexts. […] the term agriculture will be used to apply in a general sense to all activities involving cultivation and domestication of plants.”

“In general, whereas a family of hunters and gatherers might need several square kilometers of territory for subsistence, an average family of shifting cultivators will be able to get by with a few hectares of crop-producing land. A family of irrigation agriculturalists will normally be able to manage with less than one hectare. Thus, along the scale of increasing intensification of production, less land is needed to feed a standard unit such as a family or individual. […] The reason why agriculturalists can live at much higher densities than hunters and collectors is because food is produced, on average, more intensively per unit of exploited area. Food-collecting mothers also tend to space births more widely than sedentary cultivators for reasons believed to relate in part to factors of mobility and diet, leading in combination to biologically reduced frequencies of conception. This form of birth control maximizes the number of hunter-gatherer children able to survive to adulthood, but keeps the overall populations small.”

“With the Holocene amelioration of climate to conditions like those of the present, a rapid change that occurred about 11,500 years ago, the world’s climates became warmer, wetter, and a good deal more reliable on a short term basis […] It was this reliability that gave the early edge to farming […] Holocene climate was clearly the ultimate enabler of early farming, but it was not the proximate cause behind individual transitions. [The importance of climate was also emphasized in Boyd and Richerson – US] […] A combined explanation of affluence alternating with mild environmental stress, especially in “risky” but highly productive early Holocene environments with periodic fluctuations in food supplies, is becoming widely favored by many archaeologists today as one explanation for the shift to early agriculture. […] It is necessary […] to emphasize that the regional beginnings of agriculture must have involved such a complex range of variables that we would be blind to ignore any of the above factors – prior sedentism, affluence and choice, human-plant co-evolution, environmental change and periodic stress, population pressure, and certainly the availability of suitable candidates for domestication. […] most suggested “causes” overlap so greatly that it is often hard to separate them. […] there can be no one-line explanation for the origins of agriculture.”

“[M]any recent hunter-gatherers have been observed to modify their environments to some degree to encourage food yields, whether by burning, replanting, water diversion, or keeping of decoy animals or domesticated dogs […] Most agriculturalists also hunt if the opportunity is presented and always have done so throughout the archaeological record. […] there is good evidence in recent societies for some degree of overlap between food collection and food production. But the whole issue here revolves around just what level of “food production” is implied. […] any idea that mobile hunters and gatherers can just shift in and out of agricultural (or pastoral) dependent lifestyle at will seems unrealistic in terms of the major scheduling shifts required by the annual calendars of resource availability, movement, and activity associated with the two basic modes of production. There are very few hints of such circumstances ever occurring in the ethnographic record […] Mobile foragers must give an increasing commitment to sedentism if agriculture is to become a successful mainstay of their economy […] In general for the Old World, we see that hunters and gatherers may practice a small amount of agriculture, and agriculturalists may practice a small amount of hunting and gathering, but the two modes of production most decisively do not merge or reveal a gentle cline. […] both Old and New World populations evidently found it problematic to shift in and out of agricultural dependence on a regular basis.”

“In order to approach the ethnographic record systematically and to extract useful comparative information, it is essential not to treat all recorded ethnographic hunter-gatherer societies as being one simple category, or as having had the same basic historical trajectories stretching back far into the Pleistocene past […] Hunter-gatherers have had histories just as tumultuous in many cases as have agriculturalists”.

Bellwood favours in his coverage of this topic a model with three different groups of hunter-gatherers. I'm not sure ‘favours’ is the right word; perhaps it'd be more accurate to state that he uses such a model to illustrate one of the ways in which different groups of hunters and gatherers are dissimilar, and why overlooking such dissimilarities may be problematic. In the model he presents in the book, one group consists of hunter-gatherers who live/d in close proximity to agricultural societies. These people tend to live in marginal areas where it's hard to make agriculture work, and they tend to be surrounded by agriculturalists (‘encapsulation’). In many places where you'd encounter such people there is/was some sort of established exchange system, with farmers trading e.g. cereals for e.g. meat procured by the hunter-gatherers. One thing to always keep in mind here is that although, in the long run, the hunter-gatherers were displaced and circumscribed by agricultural societies, far from all interactions between these groups were hostile; mutually beneficial arrangements could be arrived at, even if they might not have been stable long-term. A related point is that hunter-gathering was probably a much more attractive option in the past than it is today, as the encapsulation process was not nearly so far advanced; such groups had better land, and despite not being farmers they might still benefit from the activities of some of the farmers who lived nearby. Bellwood is of course very interested in why agriculture spread originally, and he mentions in this context that although some such circumscribed hunter-gatherer societies may adopt agriculture eventually, these societies are not the place to look if you're interested in the question of how agriculture originally spread throughout the world – which seems very reasonable to me. As he puts it in the notes, “while low-level food production can exist in theory, my feeling is that it has always been a child of marginal environments, where farmers necessarily retracted into food collection or where foragers were able to invest in minor cultivation without too much competition from other farmers. Such societies represent the ends, rather than the sources, of historical trajectories of agricultural expansion.”

The second group in Bellwood’s hunter-gatherer ‘model’ consists of ‘unenclosed’ hunter-gatherers. A few quotes:

“This group comprises those hunter-gatherers who inhabited agricultural latitudes in Australia, the Andaman Islands, and many regions of North America, especially on the West Coast and in Florida, but who (unlike the members of group 1) lived lives generally apart from farmers prior to European colonization. Many of these societies in North America lived in coastal regions with prolific maritime resources […] Some were also in periodic but non-threatening contact with farmers in adjacent regions in prehistory and thus had opportunities, never taken, to adopt agriculture […] Socially, […] such groups overlapped greatly with agriculturalists, indicating that social complexity of the chiefdom type can relate in terms of origin more to the intensity and reliability of resources and population density than to any simple presence of food production as opposed to hunting and gathering. […] The ranked and populous hunter-gatherer societies of northern California were no more interested in adopting agriculture than were Cape York Aborigines or the Semang, and perhaps even the majority of hunter-gatherers in prehistory. It does not follow that hunter-gatherers who have “complex” social institutions will necessarily become farmers whenever they are introduced to the farming concept.”

The third group in Bellwood’s model was really interesting to me, as it's a group I had previously wanted to read about and one I find quite fascinating. This group consists of hunter-gatherers who used to be agriculturalists, i.e. former agriculturalists who later ‘reverted’ to hunter-gathering for one reason or another. A few quotes:

“Some hunter-gatherers appear to have descended from original farming or pastoralist societies, via specializations into environments where agriculture was not possible or decidedly marginal. Some also exist in direct contact with agriculturalist groups closely related in terms of cultural and biological ancestry. […] Some of the rain-forest hunters and gatherers of Island Southeast Asia […] descend from original agricultural populations, if the linguistic and biological data are any guide. In this view, the ancestral Punan and Kubu became hunter-gatherers, especially wild sago collectors in the case of the Punan, via conscious decisions to move into interfluvial rain-forest hunting and gathering in regions that riverine agriculturalists found hard to penetrate. Other hunter-gatherers descended from cultivators include some Bantu speakers in southern Africa, possibly the honey-collecting Dorobo or Okiek of the Kenyan Highlands of East Africa, probably [as he notes elsewhere, “this is a difficult group to deal with in terms of authentication”] some marginal sago-collecting groups […] in the Sepik basin of New Guinea, and some Indian groups such as the Chenchu and Birhor. […] the Numic-speaking Uto-Aztecan peoples of the Great Basin and adjacent areas […] appear to have abandoned a former agricultural lifestyle around 1,000 years ago. These people, linguistic descendants of original maize-cultivators in Mexico and the Southwest, eventually found themselves in a dry region where maize agriculture had become marginal or no longer possible [Joseph Tainter covers the collapse of the ‘Chacoans’ in some detail in his book – US] […] Group 3 hunter-gatherer societies are of especial interest because it is far easier for a relatively marginal food-producing community to turn to hunting and gathering than it is for hunters and gatherers to move in the opposite direction. Thus, it is a fair expectation that members of this third group of hunter-gatherers will always have been quite numerous, particularly around the ecological margins of expanding agricultural societies. […] the group 3 societies offer one trajectory of cultural evolution that can terminate for ever the idea that evolution from foraging to farming is a one-way street.”

“[I]t is certainly not being suggested here that ancient hunter-gatherers could never have adopted agriculture from outside sources. But they would only have been likely to do so in situations where they had some demographic or environmental advantage over any farmers in the vicinity, and where there would have been significant reasons why the normal hunter-gatherer disinterest in agricultural adoptions should be overturned. We cannot assume that hunter-gatherers would automatically adopt agriculture just because it was sitting under their noses. We also need to remember that many populations of hunters and gatherers survived alongside agriculturalists in many parts of the world for millennia, without adopting agriculture […] The following chapters will demonstrate that the spread of agriculture in the past could not simply have occurred only because hunter-gatherers everywhere adopted it. Agriculture spread in Neolithic/Formative circumstances mainly because the cultural and linguistic descendants of the early cultivators increased their demographic profiles and pushed their cultural and linguistic boundaries outwards.”

September 7, 2017 Posted by | Anthropology, Archaeology, Books, Language, Personal | Leave a comment

Gastrointestinal Function in Diabetes (II)

Some more observations from the book below.

“In comparison with other parts of the gastrointestinal tract, the human oesophagus is a relatively simple organ with relatively simple functions. Despite this simplicity, disordered oesophageal function is not uncommon. […] The human oesophagus is a muscular tube that connects the pharyngeal cavity to the stomach. […] The most important functions of the human oesophagus and its sphincters are to propel swallowed food boluses to the stomach and to prevent gastro-oesophageal and oesophagopharyngeal reflux. […] Whereas the passage of liquid and solid food boluses through the oesophagus, and even acid gastrooesophageal reflux, are usually not perceived, the likelihood of perception is greater under pathological circumstances […] However, the relationship between oesophageal perception and stimulation is highly variable, e.g. patients with severe oesophagitis may deny any oesophageal symptom, while others with an endoscopically normal oesophagus may suffer from severe reflux symptoms.”

“While it is clear that oesophageal dysfunction occurs frequently in diabetes mellitus, there is considerable variation in the reported prevalence between different studies. […] Numerous studies have shown that oesophageal transit, as measured with radionuclide techniques, is slower in patients with diabetes than in age- and sex-matched healthy controls […] oesophageal transit appears to be delayed in 40–60% of patients with long-standing diabetes […] Although information relating to the prevalence of manometric abnormalities of the oesophagus [relevant link] is limited, the available data indicate that these are evident in approximately 50% of patients with diabetes […] A variety of oesophageal motor abnormalities has been demonstrated in patients with diabetes mellitus […]. These include a decreased amplitude […] and number […] of peristaltic contractions […], and an increased incidence of simultaneous […] and nonpropagated [10] contractions, as well as abnormal wave forms [17,30,32]. […] there is unequivocal evidence of damage to the extrinsic nerve supply to the oesophagus in diabetes mellitus. The results of examination of the oesophagus in 20 patients who died from diabetes disclosed histologic abnormalities in 18 of them […] The available information indicates that the prevalence of gastro-oesophageal reflux disease is higher in diabetes. Murray and co-workers studied 20 diabetic patients (14 type 1, six type 2), of whom nine (45%) were found to have excessive gastro-oesophageal acid reflux […] In a larger study of 50 type 1 diabetic patients without symptoms or history of gastro-oesophageal disease, abnormal gastro-oesophageal reflux, defined as a percentage of time with esophageal pH < 4 exceeding 3.5%, was detected in 14 patients (28%) [37].”

“Several studies have shown that the gastrointestinal motor responses to various stimuli are impaired during acute hyperglycaemia in both healthy subjects and diabetic patients […] acute hyperglycaemia reduces LOS [lower oesophageal sphincter, US] pressure and impairs oesophageal motility […] Several studies have shown that abnormal oesophageal motility is more frequent in diabetic patients who have evidence of peripheral or autonomic neuropathy than in those without […] In one of the largest studies that focused on the relationship between neuropathy and disordered oesophageal function, 50 […] insulin-requiring diabetics were stratified into three groups: (a) patients without peripheral neuropathy (n = 18); (b) patients with peripheral neuropathy but no autonomic neuropathy (n = 20); and (c) patients with both peripheral and autonomic neuropathy (n = 12). Radionuclide oesophageal emptying was found to be abnormal in 55%, 70% and 83% of patients in groups A, B and C, respectively [17]. […] It must be emphasised, however, that although several studies have provided evidence for the existence of a relationship between disordered oesophageal function and diabetic autonomic neuropathy, this relationship is relatively weak [13,14,17,27,37,49].”

“There is considerable disagreement in the literature as to the prevalence of symptoms of oesophageal dysfunction in diabetes mellitus. Some publications indicate that patients with diabetes mellitus usually do not complain about oesophageal symptoms, even when severe oesophageal dysfunction is present. […] However, in other studies a high prevalence of oesophageal symptoms in diabetics has been documented. For example, 27% of 137 unselected diabetics attending an outpatient clinic admitted to having dysphagia when specifically asked […] The poor association between oesophageal dysfunction and symptoms in patients with diabetes may reflect impaired perception of oesophageal stimuli caused by neuropathic abnormalities in afferent pathways. The development of symptoms and signs of gastro-oesophageal reflux disease in diabetics may in part be counteracted by a decrease in gastric acid secretion [59]. […] [However] oesophageal acid exposure is increased in about 40% of diabetics and it is known that the absence of reflux symptoms does not exclude the presence of severe oesophagitis and/or Barrett’s metaplasia. Due to impaired oesophageal perception, the proportion of asymptomatic patients with reflux disease may be higher in the presence of diabetes than when diabetes is absent. It might, therefore, be argued that a screening upper gastrointestinal endoscopy should be performed in diabetic patients, even when no oesophageal or gastric symptoms are reported. However, [a] more cost-effective and realistic approach may be to perform endoscopy in diabetics with other risk factors for reflux disease, in particular severe obesity. […] Since upper gastrointestinal symptoms correlate poorly with objective abnormalities of gastrointestinal motor function in diabetes, the symptomatic benefit that could be expected from correction of these motor abnormalities is questionable. […] Little or nothing is known about the prognosis of disordered oesophageal function in diabetes. Long-term follow-up studies are lacking.”

“Abnormally delayed gastric emptying, or gastroparesis, was once considered to be a rare sequela of diabetes mellitus, occurring occasionally in patients who had long-standing diabetes complicated by symptomatic autonomic neuropathy, and inevitably associated with both intractable upper gastrointestinal symptoms and a poor prognosis [1]. Consequent upon the development of a number of techniques to quantify gastric motility […] and the rapid expansion of knowledge relating to both normal and disordered gastric motor function in humans over the last ∼ 20 years, it is now recognised that these concepts are incorrect. […] Delayed gastric emptying represents a frequent, and clinically important, complication of diabetes mellitus. […] Cross-sectional studies […] have established that gastric emptying of solid, or nutrient liquid, meals is abnormally slow in some 30–50% of outpatients with longstanding type 1 [7–20] or type 2 [20–26] diabetes […]. Early studies, using insensitive barium contrast techniques to quantify gastric emptying, clearly underestimated the prevalence substantially [1,27]. The reported prevalence of delayed gastric emptying is highest when gastric emptying of both solid and nutrient-containing liquids (or semi-solids) are measured, either simultaneously or on separate occasions [17,28,29], as there is a relatively poor correlation between gastric emptying of solids and liquids in diabetes [28–30]. […] It is now recognised that delayed gastric emptying also occurs frequently (perhaps about 30%) in children and adolescents with type 1 diabetes [37–39]. […] intragastric meal distribution is also frequently abnormal in outpatients with diabetes, with increased retention of food in both the proximal and distal stomach [31,33]. The former may potentially be important in the aetiology of gastro-oesophageal reflux [34], which appears to occur more frequently in patients with diabetes […] Diabetic gastroparesis is often associated with motor dysfunction in other areas of the gut, e.g. oesophageal transit is delayed in some 50% of patients with long-standing diabetes [8].”

“Overall patterns of gastric emptying are critically dependent on the physical and chemical composition of a meal, so that there are substantial differences between solids, semi-solids, nutrient liquids and non-nutrient liquids [70]. […] The major factor regulating gastric emptying of nutrients (liquids and ‘liquefied’ solids) is feedback inhibition, triggered by receptors that are distributed throughout the small intestine [72]; as a result of this inhibition, nutrient-containing liquids usually empty from the stomach at an overall rate of about 2 kcal/min, after an initial emptying phase that may be somewhat faster [73]. These small intestinal receptors also respond to pH, osmolality and distension, as well as nutrient content. […] While the differential emptying rates of solids, nutrient and non-nutrient liquids when ingested alone are well established, there is much less information about the interaction between different meal components. When liquids and solids are consumed together, liquids empty preferentially (∼ 80% before the solid starts to empty) […] and the presence of a solid meal results in an overall slowing of a simultaneously ingested liquid [71,75,76]. Therefore, while it is clear that the stomach can, to some extent, regulate the emptying of liquids and solids separately, the mechanisms by which this is accomplished remain poorly defined. Extracellular fat has a much lower density than water and is liquid at body temperature. The pattern of gastric emptying of fat, and its effects on emptying of other meal components are, therefore, dependent on posture — in the left lateral posture oil accumulates in the stomach and empties early, which markedly delays emptying of a nutrient liquid [77]. Gastric emptying is also influenced by patterns of previous nutrient intake. In healthy young and older subjects, supplementation of the diet with glucose is associated with acceleration of gastric emptying of glucose [78,79], while short-term starvation slows gastric emptying”.
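
A quick back-of-the-envelope illustration of what the ~2 kcal/min figure implies, using my own numbers rather than anything from the book, and ignoring the somewhat faster initial phase mentioned above:

# Approximate time for a nutrient liquid to empty from the stomach at ~2 kcal/min.
emptying_rate = 2.0  # kcal/min, the figure quoted above
for meal_kcal in (200, 400, 600):
    minutes = meal_kcal / emptying_rate
    print(f"{meal_kcal} kcal: ~{minutes:.0f} min (~{minutes / 60:.1f} h)")
# A 600 kcal liquid meal would thus take on the order of five hours to leave the stomach.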

“[I]n animal models of diabetes a number of morphological changes are evident in the autonomic nerves supplying the gut and the myenteric plexus, including a reduction in the number of myelinated axons in the vagosympathetic trunk and neurons in the dorsal root ganglia, abnormalities in neurotransmitters […] as well as a reduced number of interstitial cells of Cajal in the fundus and antrum [89–92]. In contrast, there is hitherto little evidence of a fixed pathological process in the neural tissue of humans with diabetes […] While a clear-cut association between disordered gastrointestinal function in diabetes mellitus and the presence of autonomic neuropathy remains to be established, it is now recognised that acute changes in the blood glucose concentration have a substantial, and reversible, effect on gastric (as well as oesophageal, intestinal, gallbladder and anorectal) motility, in both healthy subjects and patients with diabetes […] Marked hyperglycaemia (blood glucose concentration ∼ 15 mmol/l) affects motility in every region of the gastrointestinal tract [103]. […] In healthy subjects [114] and patients with uncomplicated type 1 diabetes […] gastric emptying is accelerated markedly during hypoglycaemia […] this response is likely to be important in the counterregulation of hypoglycaemia. It is not known whether the magnitude of the effect of hypoglycaemia on gastric emptying is influenced by gastroparesis and/or autonomic neuropathy. Recent studies have established that changes in the blood glucose concentration within the normal postprandial range also influence gastric emptying and motility [104–106]; emptying of solids and nutrient-containing liquids is slower at a blood glucose of 8 mmol/l than at 4 mmol/l in both healthy subjects and patients with type 1 diabetes […] Recent studies suggest that the rate of gastric emptying is a significant factor in postprandial hypotension. The latter, which may lead to syncope and falls, is an important clinical problem, particularly in the elderly and patients with autonomic dysfunction (usually diabetes mellitus), occurring more frequently than orthostatic hypotension [154].”

“Gastric emptying is potentially an important determinant of oral drug absorption; most orally administered drugs (including alcohol) are absorbed more slowly from the stomach than from the small intestine because the latter has a much greater surface area [179,180]. Thus, delayed gastric emptying (particularly that of tablets or capsules, which are not degraded easily in the stomach) and a reduction in antral phase 3 activity, may potentially lead to fluctuations in the serum concentrations of orally administered drugs. This may be particularly important when a rapid onset of drug effect is desirable, as with some oral hypoglycaemic drugs […]. There is relatively little information about drug absorption in patients with diabetic gastroparesis [179] and additional studies are required.”

“Glycated haemoglobin is influenced by both fasting and postprandial glucose levels; while their relative contributions have not been defined precisely [181], it is clear that improved overall glycaemic control, as assessed by glycated haemoglobin, can be achieved by lowering postprandial blood glucose concentrations, even at the expense of higher fasting glucose levels [182]. Accordingly, the control of postprandial blood glucose levels, as opposed to glycated haemoglobin, now represents a specific target for treatment […] It remains to be established whether postprandial glycaemia per se, including the magnitude of postprandial hyperglycaemic spikes, has a distinct role in the pathogenesis of diabetic complications, but there is increasing data to support this concept [181,183,184]. It is also possible that the extent of blood glucose fluctuations is an independent determinant of the risk for long-term diabetic complications [184]. […] postprandial blood glucose levels are potentially determined by a number of factors, including preprandial glucose concentrations, the glucose content of a meal, small intestinal delivery and absorption of nutrients, insulin secretion, hepatic glucose metabolism and peripheral insulin sensitivity. Although the relative contribution of these factors remains controversial, and is likely to vary with time after a meal, it is now recognised that gastric emptying accounts for at least 35% of the variance in peak glucose levels after oral glucose (75 g) in both healthy individuals and patients with type 2 diabetes […] It is also clear that even modest perturbations in gastric emptying of carbohydrate have a major effect on postprandial glycaemia [76,79]. […] it appears that much of the observed variation in the glycaemic response to different food types (‘glycaemic indices’) in both normal subjects and patients with diabetes is attributable to differences in rates of gastric emptying [103]. […] In type 1 patients with gastroparesis […] less insulin is initially required to maintain euglycaemia after a meal when compared to those with normal gastric emptying [187]. […] There are numerous uncontrolled reports supporting the concept […] that in type 1 patients gastroparesis is a risk factor for poor glycaemic control.”
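
To be explicit about what ‘accounts for at least 35% of the variance in peak glucose levels’ means: roughly speaking, it corresponds to an R² of about 0.35 when peak glucose is regressed on a measure of gastric emptying. A toy simulation of that idea (my own made-up numbers, nothing to do with the studies cited):

# Toy illustration of "X accounts for ~35% of the variance in Y" as an R-squared value.
import numpy as np
rng = np.random.default_rng(1)
emptying_rate = rng.normal(3.0, 1.0, 200)                       # arbitrary units, simulated
peak_glucose = 8.0 + emptying_rate + rng.normal(0.0, 1.3, 200)  # noise chosen to give R^2 near 0.35
r = np.corrcoef(emptying_rate, peak_glucose)[0, 1]
print(f"Share of variance in peak glucose explained by emptying rate: {r**2:.2f}")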

“The potential for the modulation of gastric emptying, by dietary or pharmacological means, to minimise postprandial glucose excursions and optimise glycaemic control, represents a novel approach to the optimisation of glycaemic control in diabetes, which is now being explored actively. It is important to appreciate that the underlying strategies are likely to differ fundamentally between type 1 and type 2 diabetes. In type 1 diabetes, interventions that improve the coordination between nutrient absorption and the action of exogenous insulin are likely to be beneficial, even in those patients who have delayed gastric emptying, i.e. by accelerating or even slowing gastric emptying, so that the rate of nutrient delivery (and hence absorption) is more predictable. In contrast, in type 2 diabetes, it may be anticipated that slowing of the absorption of nutrients would be desirable […] In the treatment of type 2 diabetes mellitus, dietary modifications potentially represent a more attractive and cost-effective approach than drugs […] A number of dietary strategies may slow carbohydrate absorption […] an increase in dietary fibre […] Fat is a potent inhibitor of gastric emptying and […] these effects may be dependent on posture [77]; there is the potential for relatively small quantities of fat given immediately before consumption of, or with, a meal to slow gastric emptying of other meal components, so that the postprandial rise in blood glucose is minimised [210] (this is analogous to the slowing of alcohol absorption and liquid gastric emptying when alcohol is ingested after a solid meal, rather than in the fasted state [75]). […] there is evidence that the suppression of subsequent food intake by the addition of fat to a meal may exceed the caloric value of the fat load [212]. In the broadest sense, the glycaemic response to a meal is also likely to be critically dependent on whether food from the previous meal is still present in the stomach and/or small intestine at the time of its ingestion, so that glucose tolerance may be expected to be worse in the fasted state […] than after a meal.”

“At present it is not known whether normalisation of gastric emptying in either type 1 or type 2 patients with gastroparesis improves glycaemic control. […] prokinetic drugs would not be expected to have a beneficial effect on glycaemic control in type 2 patients who are not using insulin. Erythromycin may, however, as a result of its interaction with motilin receptors, also stimulate insulin secretion (and potentially improve glycaemic control by this mechanism) in type 2 diabetes [220] […] It should […] be recognised that any drug that slows gastric emptying has the potential to induce or exacerbate upper gastrointestinal symptoms, delay oral drug absorption and impair the counter-regulation of glycaemia. […] At present, the use of prokinetic drugs (mainly cisapride, domperidone, metoclopramide and erythromycin) forms the mainstay of therapy [167,244–259], and most patients will require drug treatment. In general, these drugs all result in dose-related improvements in gastric emptying after acute administration […] The response to prokinetic therapy (magnitude of acceleration in gastric emptying) tends to be greater when gastric emptying is more delayed. It should be recognised that relatively few controlled studies have evaluated the effects of ‘prolonged’ (> 8 weeks) prokinetic therapy, that in many studies the sample sizes have been small, and that the assessments of gastrointestinal symptoms have, not infrequently, been suboptimal; furthermore, the results of some of these studies have been negative [32]. There have hitherto been relatively few randomised controlled trials of high quality, and those that are available differ substantially in design. […] In general, there is a poor correlation between effects on symptoms and gastric emptying — prokinetic drugs may improve symptoms by effects unrelated to acceleration of gastric emptying or central anti-emetic properties [254].”

“Autoimmune factors are well recognised to play a role in the aetiology of type 1 diabetes [316,317]. In such patients there is an increased prevalence of autoimmune aggression against non-endocrine tissues, including the gastric mucosa. The reported prevalence of parietal cell antibodies in patients with type 1 diabetes is in the range 5–28%, compared to 1.4–12% in non-diabetic controls […] The autoimmune response to parietal cell antibodies may lead to atrophic gastritis, pernicious anaemia and iron deficiency anaemia […] Parietal cell antibodies can inhibit the secretion of intrinsic factor, which is necessary for the absorption of vitamin B12, potentially resulting in pernicious anaemia. The prevalence of latent and overt pernicious anaemia in type 1 diabetes has been reported to be 1.6–4% and 0.4%, respectively […] screening for parietal cell antibodies in patients with type 1 diabetes currently appears inappropriate. However, there should be a low threshold for further investigation in those patients presenting with anaemia”.

September 1, 2017 Posted by | Books, Diabetes, Gastroenterology, Immunology, Medicine, Neurology | Leave a comment

Light

I gave the book two stars. Some quotes and links below.

“Lenses are ubiquitous in image-forming devices […] Imaging instruments have two components: the lens itself, and a light detector, which converts the light into, typically, an electrical signal. […] In every case the location of the lens with respect to the detector is a key design parameter, as is the focal length of the lens which quantifies its ‘ray-bending’ power. The focal length is set by the curvature of the surfaces of the lens and its thickness. More strongly curved surfaces and thicker materials are used to make lenses with short focal lengths, and these are used usually in instruments where a high magnification is needed, such as a microscope. Because the refractive index of the lens material usually depends on the colour of light, rays of different colours are bent by different amounts at the surface, leading to a focus for each colour occurring in a different position. […] lenses with a big diameter and a short focal length will produce the tiniest images of point-like objects. […] about the best you can do in any lens system you could actually make is an image size of approximately one wavelength. This is the fundamental limit to the pixel size for lenses used in most optical instruments, such as cameras and binoculars. […] Much more sophisticated methods are required to see even smaller things. The reason is that the wave nature of light puts a lower limit on the size of a spot of light. […] At the other extreme, both ground- and space-based telescopes for astronomy are very large instruments with relatively simple optical imaging components […]. The distinctive feature of these imaging systems is their size. The most distant stars are very, very faint. Hardly any of their light makes it to the Earth. It is therefore very important to collect as much of it as possible. This requires a very big lens or mirror”.
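
The ‘approximately one wavelength’ limit mentioned above can be made a little more concrete with the standard diffraction-limit (Rayleigh-criterion) estimate, which is not spelled out in the book: the smallest spot has a diameter of roughly 1.22·λ·f/D for a lens of focal length f and aperture diameter D. A quick check with plausible camera-lens numbers of my own choosing:

# Diffraction-limited spot size ~ 1.22 * wavelength * focal_length / aperture_diameter.
wavelength = 550e-9    # green light, metres
focal_length = 50e-3   # 50 mm lens (illustrative)
diameter = 25e-3       # 25 mm aperture, i.e. an f/2 lens (illustrative)
spot = 1.22 * wavelength * focal_length / diameter
print(f"Smallest spot: {spot * 1e6:.2f} micrometres")  # ~1.3 um, i.e. a couple of wavelengths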

“[W]hat sort of wave is light? This was […] answered in the 19th century by James Clerk Maxwell, who showed that it is an oscillation of a new kind of entity: the electromagnetic field. This field is effectively a force that acts on electric charges and magnetic materials. […] In the early 19th century, Michael Faraday had shown the close connections between electric and magnetic fields. Maxwell brought them together, as the electromagnetic force field. […] in the wave model, light can be considered as very high frequency oscillations of the electromagnetic field. One consequence of this idea is that moving electric charges can generate light waves. […] When […] charges accelerate — that is, when they change their speed or their direction of motion — then a simple law of physics is that they emit light. Understanding this was one of the great achievements of the theory of electromagnetism.”

“It was the observation of interference effects in a famous experiment by Thomas Young in 1803 that really put the wave picture of light as the leading candidate as an explanation of the nature of light. […] It is interference of light waves that causes the colours in a thin film of oil floating on water. Interference transforms very small distances, on the order of the wavelength of light, into very big changes in light intensity — from no light to four times as bright as the individual constituent waves. Such changes in intensity are easy to detect or see, and thus interference is a very good way to measure small changes in displacement on the scale of the wavelength of light. Many optical sensors are based on interference effects.”
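
The ‘from no light to four times as bright’ range follows from the fact that it is wave amplitudes, not intensities, that add: two equal beams of individual intensity I0 combine to give 4·I0·cos²(φ/2), where φ is the phase difference between them. A small sketch of that standard two-beam result (my own illustration, not the book’s):

# Two equal-amplitude beams: combined intensity as a function of their phase difference.
import math
I0 = 1.0  # intensity of each beam on its own
for phase_deg in (0, 90, 180):
    phi = math.radians(phase_deg)
    intensity = 4 * I0 * math.cos(phi / 2) ** 2
    print(f"phase difference {phase_deg:>3} deg: intensity {intensity:.1f} x I0")
# 0 deg gives 4 x I0 (fully constructive), 180 deg gives 0 (fully destructive).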

“[L]ight beams […] gradually diverge as they propagate. This is because a beam of light, which by definition has a limited spatial extent, must be made up of waves that propagate in more than one direction. […] This phenomenon is called diffraction. […] if you want to transmit light over long distances, then diffraction could be a problem. It will cause the energy in the light beam to spread out, so that you would need a bigger and bigger optical system and detector to capture all of it. This is important for telecommunications, since nearly all of the information transmitted over long-distance communications links is encoded on to light beams. […] The means to manage diffraction so that long-distance communication is possible is to use wave guides, such as optical fibres.”

“[O]ptical waves […] guided along a fibre or in a glass ‘chip’ […] underpins the long-distance telecommunications infrastructure that connects people across different continents and powers the Internet. The reason it is so effective is that light-based communications have much more capacity for carrying information than do electrical wires, or even microwave cellular networks. […] In optical communications, […] bits are represented by the intensity of the light beam — typically low intensity is a 0 and higher intensity a 1. The more of these that arrive per second, the faster the communication rate. […] Why is optics so good for communications? There are two reasons. First, light beams don’t easily influence each other, so that a single fibre can support many light pulses (usually of different colours) simultaneously without the messages getting scrambled up. The reason for this is that the glass of which the fibre is made does not absorb light (or only absorbs it in tiny amounts), and so does not heat up and disrupt other pulse trains. […] the ‘crosstalk’ between light beams is very weak in most materials, so that many beams can be present at once without causing a degradation of the signal. This is very different from electrons moving down a copper wire, which is the usual way in which local ‘wired’ communications links function. Electrons tend to heat up the wire, dissipating their energy. This makes the signals harder to receive, and thus the number of different signal channels has to be kept small enough to avoid this problem. Second, light waves oscillate at very high frequencies, and this allows very short pulses to be generated. This means that the pulses can be spaced very close together in time, making the transmission of more bits of information per second possible. […] Fibre-based optical networks can also support a very wide range of colours of light.”

“Waves can be defined by their wavelength, amplitude, and phase […]. Particles are defined by their position and direction of travel […], and a collection of particles by their density […] and range of directions. The media in which the light moves are characterized by their refractive indices. This can vary across space. […] Hamilton showed that what was important was how rapidly the refractive index changed in space compared with the length of an optical wave. That is, if the changes in index took place on a scale of close to a wavelength, then the wave character of light was evident. If it varied more smoothly and very slowly in space then the particle picture provided an adequate description. He showed how the simpler ray picture emerges from the more complex wave picture in certain commonly encountered situations. The appearance of wave-like phenomena, such as diffraction and interference, occurs when the size scales of the wavelength of light and the structures in which it propagates are similar. […] Particle-like behaviour — motion along a well-defined trajectory — is sufficient to describe the situation when all objects are much bigger than the wavelength of light, and have no sharp edges.”

“When things are heated up, they change colour. Take a lump of metal. As it gets hotter and hotter it first glows red, then orange, and then white. Why does this happen? This question stumped many of the great scientists [in the 19th century], including Maxwell himself. The problem was that Maxwell’s theory of light, when applied to this problem, indicated that the colour should get bluer and bluer as the temperature increased, without a limit, eventually moving out of the range of human vision into the ultraviolet—beyond blue—region of the spectrum. But this does not happen in practice. […] Max Planck […] came up with an idea to explain the spectrum emitted by hot objects — so-called ‘black bodies’. He conjectured that when light and matter interact, they do so only by exchanging discrete ‘packets’, or quanta, of energy. […] this conjecture was set to radically change physics.”

“What Dirac did was to develop a quantum mechanical version of Maxwell’s theory of electromagnetic fields. […] It set the quantum field up as the fundamental entity on which the universe is built — neither particle nor wave, but both at once; complete wave–particle duality. It is a beautiful reconciliation of all the phenomena that light exhibits, and provides a framework in which to understand all optical effects, both those from the classical world of Newton, Maxwell, and Hamilton and those of the quantum world of Planck, Einstein, and Bohr. […] Light acts as a particle of more or less well-defined energy when it interacts with matter. Yet it retains its ability to exhibit wave-like phenomena at the same time. The resolution [was] a new concept: the quantum field. Light particles — photons — are excitations of this field, which propagates according to quantum versions of Maxwell’s equations for light waves. Quantum fields, of which light is perhaps the simplest example, are now regarded as being the fundamental entities of the universe, underpinning all types of material and non-material things. The only explanation is that the stuff of the world is neither particle nor wave but both. This is the nature of reality.”

Some links:

Light.
Optics.
Watt.
Irradiance.
Coherence (physics).
Electromagnetic spectrum.
Joseph von Fraunhofer.
Spectroscopy.
Wave.
Transverse wave.
Wavelength.
Spatial frequency.
Polarization (waves).
Specular reflection.
Negative-index metamaterial.
Birefringence.
Interference (wave propagation).
Diffraction.
Young’s interference experiment.
Holography.
Photoactivated localization microscopy.
Stimulated emission depletion (STED) microscopy.
Fourier’s theorem (I found it hard to find a good source on this one. According to the book, “Fourier’s theorem says in simple terms that the smaller you focus light, the broader the range of wave directions you need to achieve this spot”)
X-ray diffraction.
Brewster’s angle.
Liquid crystal.
Liquid crystal display.
Wave–particle duality.
Fermat’s principle.
Wavefront.
Maupertuis’ principle.
Johann Jakob Balmer.
Max Planck.
Photoelectric effect.
Niels Bohr.
Matter wave.
Quantum vacuum.
Lamb shift.
Light-emitting diode.
Fluorescent tube.
Synchrotron radiation.
Quantum state.
Quantum fluctuation.
Spontaneous emission/stimulated emission.
Photodetector.
Laser.
Optical cavity.
X-ray absorption spectroscopy.
Diamond Light Source.
Mode-locking.
Stroboscope.
Femtochemistry.
Spacetime.
Atomic clock.
Time dilation.
High harmonic generation.
Frequency comb.
Optical tweezers.
Bose–Einstein condensate.
Pump probe spectroscopy.
Vulcan laser.
Plasma (physics).
Nonclassical light.
Photon polarization.
Quantum entanglement.
Bell test experiments.
Quantum key distribution/Quantum cryptography/Quantum computing.

August 31, 2017 Posted by | Books, Chemistry, Computer science, Physics | Leave a comment

Gastrointestinal Function in Diabetes (I)

“During the last 15–20 years, primarily as a result of the application of novel investigative techniques, there has been a rapid expansion of knowledge relating to the function of the gastrointestinal tract in diabetes mellitus. These insights have been substantial and have led to the recognition that gastrointestinal function represents a hitherto inappropriately neglected, as well as important, aspect of diabetes management. In particular, disordered gastrointestinal motor and sensory function occur frequently in both type 1 and type 2 diabetes and may be associated with significant clinical sequelae. Recent epidemiological studies have established that there is a high prevalence of gastrointestinal symptoms in the diabetic population and that these are associated with impaired quality of life. Furthermore, upper gastrointestinal motility, even when normal, is central to the regulation of postprandial blood glucose concentrations. Hence, diabetes and the gastrointestinal tract are inextricably linked. […] This book, which to our knowledge represents the first of its kind, was stimulated by the need to consolidate these advances, to illuminate an area that is perceived as increasingly important, but somewhat difficult to understand. […] The book aims to be comprehensive and to present the relevant information in context for both the clinician and clinical researcher. There are nine chapters: five are organ-specific, relating to oesophageal, gastric, intestinal, anorectal and hepatobiliary function; the four other chapters address epidemiological aspects of gastrointestinal function in diabetes, the effects of diabetes mellitus on gastrointestinal function in animal models, the impact of gastrointestinal function on glycaemic control, and the evaluation of gastrointestinal autonomic function. All of the authors are recognised internationally for their expertise in the field”.

I added this book to my list of favourite books on goodreads – it’s a great book, from which I learned a lot.

I have added some more quotes and observations from the book below, as well as a few comments.

“Population-based studies of gastrointestinal symptoms in diabetic patients have been relatively few and the results conflicting […] To date, a total of nine population-based studies have been undertaken evaluating gastrointestinal symptoms in subjects with diabetes mellitus […] Depending on the population studied, the prevalence of symptoms has varied considerably in patients with both type 1 and type 2 diabetes mellitus. […] there is evidence that gastrointestinal symptoms are linked with diabetes mellitus, but the prevalence over and above the general population is at most only modestly increased. Some studies have failed to detect an association between diabetes and gastrointestinal symptoms, but several confounders may have obscured the findings. For example, it is well documented that chronic gastrointestinal symptoms are common in non-diabetics in the community, presumably due to functional gastrointestinal disorders such as the irritable bowel syndrome [33,34]. Moreover, the presence of diabetic complications and possibly long-term glycaemic control appear to be important factors in symptom onset [31,32]. This may explain the difficulty in establishing a firm link between diabetes and chronic gastrointestinal complaints in population-based studies.”

It is perhaps important to interpose already at this early stage of the coverage that diabetes seems to be related to many changes in gastrointestinal function which do not necessarily cause symptoms that lead to patient complaints, but which may even so affect individuals with the disease in a variety of ways. For example, drug metabolism may be altered in diabetics secondary to hyperglycemia-induced delayed gastric emptying, which can naturally be very important in some situations (drugs don’t work, or don’t work when they’re supposed to). Symptomatic disease is important to observe and address, but there are many other aspects that may be relevant as well. The symptomatology of diabetes-related gastrointestinal changes is of course complicated by the fact that nervous system involvement is an important player, and a player which, as we know from other contexts, may both generate symptoms (in this setting you’d e.g. think of altered peristalsis in severe neuropathy, causing constipation) and lead to an absence of symptoms in settings where symptoms would otherwise have been present (‘silent ischemia’ is common in diabetics). I may or may not go much more into these topics; there’s a lot of interesting stuff in this book.

“In patients with long-standing type 1 and type 2 diabetes, the prevalence of delayed gastric emptying of a nutrient meal is reported to range from 27% to 40% [40–42] and the prevalence is similar in insulin-dependent and non-insulin-dependent diabetes mellitus […]. In a minority of patients (less than 10%) with long-standing diabetes, gastric emptying is accelerated [42–44]. […] A number of studies have shown that acute changes in blood glucose concentrations can have a profound effect on motor function throughout the gastrointestinal tract in both normal subjects and patients with diabetes mellitus [54]. Recent studies have demonstrated that the blood glucose concentration may also modulate the perception of sensations arising from the gastrointestinal tract [56–58]. However, there is relatively little information about the mechanisms mediating the effects of the blood glucose concentration on gastrointestinal motility. While some studies have implicated impaired glycaemic control in the genesis of chronic gastrointestinal symptoms [24,31], this remains controversial.”

“As part of the Medical Outcomes Study, that determined the impact of nine different chronic illnesses upon HRQL [Health-Related Quality of Life, US], Stewart et al. [90] used the Short Form (SF-20) of the General Health Survey to evaluate HRQL ratings in 9385 patients, 844 of whom had diabetes […] gastrointestinal disorders had a more negative impact on HRQL than all other conditions with the exception of heart disease [90]. Others have reported similar findings [120,121]. […] A study of diabetic patients undergoing transplantation [122] indicated that, of all the factors likely to compromise HRQL, the single most important one was gastrointestinal dysfunction.”

“In animal studies of gastrointestinal function in diabetes mellitus, most information has been generated using insulinopenic rats with severe hyperglycaemia; around one-third of the literature has been generated using BB rats (autoimmune spontaneous diabetic) and two-thirds using streptozotocin (STZ; chemically-induced) diabetic models. In the choice of these animal models, an assumption appears to have been often made that hyperglycaemia per se, or at least some aspect of the metabolic disturbance secondary to insulin lack, is the aetiopathologic insult. A common hypothesis is that neurotoxicity of the autonomic nervous system, secondary to this metabolic insult, is responsible for the gastrointestinal effects of diabetes. This hypothesis is described here as the ‘autonomic neuropathic’ hypothesis.”

“Central nervous structures, especially those in the brain stem […] are implicated in the normal autonomic control of gastrointestinal function […] over two-thirds of the literature regarding gastrointestinal dysfunction in diabetes is derived from chemically-induced models in which, alarmingly, much of the reported gut dysfunction could be an artifact of selective damage to central structures. It is now recognised that there are major differences in gastrointestinal function between animals in which β-cell damage was caused by chemical means and those in which damage was a result of an autoimmune process. These differences prompt an examination of the extent to which gastrointestinal dysfunction in some models is a consequence of diabetes per se, perhaps applicable to human disease, as opposed to being a consequence of damage to specific central structures.”

“The […] most accepted hypothesis in the past to explain gastrointestinal dysfunction in diabetes has been the proposal that autonomic neuropathy has disturbed the normal regulation of gut function. But there are recently identified disturbances in several of the neurohormones found in gut in different diabetic states. Several of these, including amylin, GLP-1 and PYY have effects on gut function, and should now be considered in explanations of diabetes-associated changes in gut function. […] A ‘neurocrine’ alternative to the neuropathic hypothesis focuses on the possibility that absolute or relative deficiency of the pancreatic β-cell hormone, amylin, may be of importance in the aetiology of disordered gastrointestinal function in diabetes. […] STZ diabetic rats most often show increased gastric acid secretion [63,64] and increased rates of ulceration [65–71]. This effect is exacerbated by fasting [67] and is reversed by hyperglycaemia [72] but not by insulin replacement [73]. It thus appears that insulin lack is not the ulcerogenic stimulus, and raises the possibility that absence of gastric-inhibitory factors (e.g. amylin, PYY, GLP-1), which may be absent or reduced in diabetes, could be implicated. […] autoimmune type 1 diabetic BB rats [76] and autoimmune non-obese diabetic (NOD) mice [77] in which the gastric mucosa is not an immune target, also show a marked increase in gastric erosions. The constancy of findings of acid hypersecretion and ulceration in insulinopenic diabetes invoked by diverse insults (chemical and autoimmune) indicates that this gastrointestinal disturbance is a direct consequence of the diabetes, and perhaps of β-cell deficiency. […] Amylin […] is a potent inhibitor of gastric acid secretion [88], independent of changes in plasma glucose [89] and prevents gastric erosion in response to a number of irritants [90–92]. These effects appear to be specific to amylin […] It is possible that amylin deficiency could be implicated in a propensity to ulceration in some forms of diabetes. It is unclear whether such a propensity exists in type 1 diabetic adults. However, type 1 diabetic children are reported to have a three- to four-fold elevation in rate of peptic disease [93].”

“Changes in intestinal mucosal function are observed in diabetic rodents, but it is unclear whether these are intrinsic and contributory to the disease process, or are secondary to the disease. […] It […] appears likely […] that diabetes-associated changes in gut enzyme expression represent a response to some aspect of the diabetic state, since they occur in both chemically-induced and genetic models, and are reversible with vigorous treatment of the diabetes. […] While there appear to be no reports that quantify the relationship between acid secretion and rates of nutrient assimilation, there is evidence that type 1 diabetes, in animal models at least, is characterised by disturbed acid regulation.”

“[D]isordered gastrointestinal motility has long been recognised as a frequent feature in diabetic patients who also exhibit neuropathy [125]. Disturbances in gastrointestinal function have been estimated by some to have a prevalence of ∼ 30% (range 5–60% [126–128]). Both peripheral and autonomic [126–128] neuropathy are frequent complications of diabetes mellitus. Since the autonomic nervous system (ANS) plays a prominent role in the regulation of gut motility, a prevailing hypothesis has been that autonomic neuropathic dysfunction could account for much of this disturbance. […] Motor disturbances associated with autonomic neuropathy include dilation of the oesophagus, gastrointestinal stasis, accumulation of digesta and constipation, mainly signs associated with vagal (parasympathetic) dysfunction. There are also reports of faecal incontinence, related to decreased sphincter pressure, and diarrhoea.”

“The best-characterised signs of damage to the autonomic nervous system during diabetes are morphological […] For example, the number of myelinated axons in the vagosympathetic trunk is decreased in diabetic rats [131], as is the number of neurones in dorsal root ganglia and peripheral postganglionic sympathetic nerves. […] In addition to alterations in numbers and morphology of axons, the tissue around the axons is also often disturbed. […] It is of interest that autonomic neuropathy can be prevented or partially reversed by rigorous glycaemic control [137], suggesting that hyperglycaemia per se is of major aetiological importance in autonomic neuropathy. […] Morphological evidence of neuropathy in BB rats includes axonal degeneration, irregularity of myelin sheaths and Wallerian degeneration […] It has been proposed that periodic hypoglycaemia in BB rats may induce Wallerian degeneration and reduced conduction velocity […] while abnormalities associated with chronic hyperglycaemia include sensory (afferent) axonopathy […] The secretion of a number of neuroendocrine substances may be decreased in diabetes. Glucagon, pancreatic polypeptide, gastrin, somatostatin and gastric inhibitory peptide levels are reportedly reduced in the gastrointestinal tract of diabetic patients […] In addition to peripheral autonomic neuropathy, neurons within the central nervous system are also reported to be damaged in animal models of diabetes, including areas […] which are important in controlling those parts of the autonomic nervous system that innervate the gut.”

“Despite ample evidence of morphologic and functional changes in nerves of rodent models of type 1 diabetes mellitus, it is not clear to what extent these changes underlie the gastrointestinal dysfunction evident in these animals. Coincidence of neuropathic and gastrointestinal changes does not necessarily prove a causal association between autonomic neuropathy and gastrointestinal dysfunction in diabetes. […] recently recognised neuroendocrine disturbances in diabetes, especially of the β-cell hormone amylin, provide an alternative to the neuropathic hypothesis […] In considering primary endocrine changes associated with type 1 diabetes mellitus, it should be recognised that the central pathogenic event is a selective and near-absolute autoimmune destruction of pancreatic β-cells. Other cell types in the islets, and other tissues, are preserved. The only confirmed hormones currently known to be specific to pancreatic β-cells are insulin and amylin [251]. Recent evidence also suggests that C-peptide, cleaved from proinsulin during intracellular processing and co-secreted with insulin, may also be biologically active [252] […] It is therefore only insulin, C-peptide and amylin that disappear following the selective destruction of β-cells. The implications of this statement are profound; all diabetes-associated sequelae are somehow related to the absence of these (and/or other possibly undiscovered) hormones, whether directly or indirectly […]. Since insulin has minimal direct effect on gut function, until recently the most plausible explanation linking β-cell destruction to changes in gastrointestinal functions was a neuropathic effect secondary to hyperglycaemia. With the recent discovery of multiple physiological gastrointestinal effects of the second β-cell hormone, amylin [255], a plausible alternate explanation of gut dysfunction following β-cell loss has emerged. That is, instead of being due to insulin lack, some gut dysfunction in insulinopenic diabetes may instead be due to the loss of its co-secreted partner, amylin. […] While insulin and amylin are essentially absent in type 1 diabetes, in states of impaired glucose tolerance and early type 2 diabetes, each of these hormones may in fact be hypersecreted […] The ZDF rat is a model of insulin resistance, with some strains developing type 2 diabetes. These animals, which hypersecrete from pancreatic β-cells, exhibit both hyperinsulinaemia and hyperamylinaemia.”

If amylin is hypersecreted in type 2 diabetics while the hormone is absent in type 1, and you then do population studies on mixed populations of type 1 and type 2 patients and try to figure out what is going on, you’re going to run into some potential issues. The picture seems not too dissimilar to what you see when you look at bone disease in diabetes; type 1s have a high fracture risk, type 2s also have a higher than normal fracture risk, but ‘the effect of diabetes’ is in fact very different in the two groups (in part – but certainly not only – because most type 2s are overweight or obese, and overweight decreases the fracture risk). Some of the relevant pathways of pathophysiological interest are identical in the two patient populations (this is also the case here; acute hyperglycemia is known to cause delayed gastric emptying even in non-diabetics), some are completely different – it’s a mess. This is one reason why I don’t think the confusing results of some of the population studies included early in the book’s coverage – which I decided not to cover in detail here – are necessarily all that surprising.

“Many gastrointestinal reflexes are glucose-sensitive, reflecting their often unrecognised glucoregulatory (restricting elevations of glucose during hyperglycaemia) and counter-regulatory functions (promoting elevation of glucose during hypoglycaemia). Glucose-sensitive effects include inhibition of food intake, control of gastric emptying rate, and regulation of gastric acid secretion and pancreatic enzyme secretion […] Some gastrointestinal manifestations of diabetes may therefore be secondary, and compensatory, to markedly disturbed plasma glucose concentrations. […] It has emerged in recent years that several of the most potent of nearly 60 reported biological actions of amylin [286] are gastrointestinal effects that appear to collectively restrict nutrient influx and promote glucose tolerance. These include inhibition of gastric emptying, inhibition of food intake, inhibition of digestive functions (pancreatic enzyme secretion, gastric acid secretion and bile ejection), and inhibition of nutrient-stimulated glucagon secretion. […] In rats, amylin is the most potent of any known mammalian peptide in slowing gastric emptying […] An amylin agonist (pramlintide), several GLP-1 agonists and exendin-4 are being explored as potential therapies for the treatment of diabetes, with inhibition of gastric emptying being recognised as a mode of therapeutic action. […] The concept of the gut as an organ of metabolic control is yet to be widely accepted, and antidiabetic drugs that moderate nutrient uptake as a mode of therapy have only begun to emerge. A potential advantage such therapies hold over those that enhance insulin action, is their general glucose dependence and low propensity to (per se) induce hypoglycaemia.”

August 29, 2017 Posted by | Books, Diabetes, Gastroenterology, Medicine, Neurology | Leave a comment

Magnetism

This book was ‘okay…ish’, but I must admit I was a bit disappointed; the coverage was much too superficial, and I’m reasonably sure the lack of formalism made the coverage harder for me to follow than it could have been. I gave the book two stars on goodreads.

Some quotes and links below.

Quotes:

“In the 19th century, the principles were established on which the modern electromagnetic world could be built. The electrical turbine is the industrialized embodiment of Faraday’s idea of producing electricity by rotating magnets. The turbine can be driven by the wind or by falling water in hydroelectric power stations; it can be powered by steam which is itself produced by boiling water using the heat produced from nuclear fission or burning coal or gas. Whatever the method, rotating magnets inducing currents feed the appetite of the world’s cities for electricity, lighting our streets, powering our televisions and computers, and providing us with an abundant source of energy. […] rotating magnets are the engine of the modern world. […] Modern society is built on the widespread availability of cheap electrical power, and almost all of it comes from magnets whirling around in turbines, producing electric current by the laws discovered by Oersted, Ampère, and Faraday.”

“Maxwell was the first person to really understand that a beam of light consists of electric and magnetic oscillations propagating together. The electric oscillation is in one plane, at right angles to the magnetic oscillation. Both of them are in directions at right angles to the direction of propagation. […] The oscillations of electricity and magnetism in a beam of light are governed by Maxwell’s four beautiful equations […] Above all, Einstein’s work on relativity was motivated by a desire to preserve the integrity of Maxwell’s equations at all costs. The problem was this: Maxwell had derived a beautiful expression for the speed of light, but the speed of light with respect to whom? […] Einstein deduced that the way to fix this would be to say that all observers will measure the speed of any beam of light to be the same. […] Einstein showed that magnetism is a purely relativistic effect, something that wouldn’t even be there without relativity. Magnetism is an example of relativity in everyday life. […] Magnetic fields are what electric fields look like when you are moving with respect to the charges that ‘cause’ them. […] every time a magnetic field appears in nature, it is because a charge is moving with respect to the observer. Charge flows down a wire to make an electric current and this produces magnetic field. Electrons orbit an atom and this ‘orbital’ motion produces a magnetic field. […] the magnetism of the Earth is due to electrical currents deep inside the planet. Motion is the key in each and every case, and magnetic fields are the evidence that charge is on the move. […] Einstein’s theory of relativity casts magnetism in a new light. Magnetic fields are a relativistic correction which you observe when charges move relative to you.”
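
For the record, the ‘beautiful expression for the speed of light’ alluded to above is, in modern notation, c = 1/√(ε0·μ0), the inverse square root of the product of the vacuum permittivity and permeability. A one-line check with the standard values (my addition, not from the book):

# Speed of light from the electric and magnetic constants of the vacuum.
eps0 = 8.854e-12   # vacuum permittivity, F/m
mu0 = 1.2566e-6    # vacuum permeability, H/m
c = (eps0 * mu0) ** -0.5
print(f"c ~ {c:.3e} m/s")  # ~3.0e8 m/s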

“[T]he Bohr–van Leeuwen theorem […] states that if you assume nothing more than classical physics, and then go on to model a material as a system of electrical charges, then you can show that the system can have no net magnetization; in other words, it will not be magnetic. Simply put, there are no lodestones in a purely classical Universe. This should have been a revolutionary and astonishing result, but it wasn’t, principally because it came about 20 years too late to knock everyone’s socks off. By 1921, the initial premise of the Bohr–van Leeuwen theorem, the correctness of classical physics, was known to be wrong […] But when you think about it now, the Bohr–van Leeuwen theorem gives an extraordinary demonstration of the failure of classical physics. Just by sticking a magnet to the door of your refrigerator, you have demonstrated that the Universe is not governed by classical physics.”

“[M]ost real substances are weakly diamagnetic, meaning that when placed in a magnetic field they become weakly magnetic in the opposite direction to the field. Water does this, and since animals are mostly water, it applies to them. This is the basis of Andre Geim’s levitating frog experiment: a live frog is placed in a strong magnetic field and because of its diamagnetism it becomes weakly magnetic. In the experiment, a non-uniformity of the magnetic field induces a force on the frog’s induced magnetism and, hey presto, the frog levitates in mid-air.”

“In a conventional hard disk technology, the disk needs to be spun very fast, around 7,000 revolutions per minute. […] The read head floats on a cushion of air about 15 nanometres […] above the surface of the rotating disk, reading bits off the disk at tens of megabytes per second. This is an extraordinary engineering achievement when you think about it. If you were to scale up a hard disk so that the disk is a few kilometres in diameter rather than a few centimetres, then the read head would be around the size of the White House and would be floating over the surface of the disk on a cushion of air one millimetre thick (the diameter of the head of a pin) while the disk rotated below it at a speed of several million miles per hour (fast enough to go round the equator a couple of dozen times in a second). On this scale, the bits would be spaced a few centimetres apart around each track. Hard disk drives are remarkable. […] Although hard disks store an astonishing amount of information and are cheap to manufacture, they are not fast information retrieval systems. To access a particular piece of information involves moving the head and rotating the disk to a particular spot, taking perhaps a few milliseconds. This sounds quite rapid, but with processors buzzing away and performing operations every nanosecond or so, a few milliseconds is glacial in comparison. For this reason, modern computers often use solid state memory to store temporary information, reserving the hard disk for longer-term bulk storage. However, there is a trade-off between cost and performance.”
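
The scaled-up analogy is easy to check with a bit of arithmetic; here is my own rough verification of two of the headline numbers, taking ‘a few kilometres’ to mean 3 km and ‘a few centimetres’ to mean 3 cm (both choices are mine and purely illustrative):

# Scaling a hard disk up from a few centimetres to a few kilometres across.
import math
scale = 3_000 / 0.03        # 3 km disk vs. 3 cm disk: a factor of 100,000 (illustrative)
fly_height = 15e-9 * scale  # the 15 nm head-disk gap, scaled up
print(f"Scaled head-disk gap: {fly_height * 1e3:.1f} mm")  # ~1.5 mm, about a pinhead
rpm = 7_000
rim_speed_m_per_h = math.pi * 3_000 * rpm * 60  # rim speed of the scaled disk, metres per hour
print(f"Rim speed: {rim_speed_m_per_h / 1609 / 1e6:.1f} million mph")  # a few million miles per hour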

“In general, there is a strong economic drive to store more and more information in a smaller and smaller space, and hence a need to find a way to make smaller and smaller bits. […] [However] greater miniaturization comes at a price. The point is the following: when you try to store a bit of information in a magnetic medium, an important constraint on the usefulness of the technology is how long the information will last for. Almost always the information is being stored at room temperature and so needs to be robust to the ever present random jiggling effects produced by temperature […] It turns out that the crucial parameter controlling this robustness is the ratio of the energy needed to reverse the bit of information (in other words, the energy required to change the magnetization from one direction to the reverse direction) to a characteristic energy associated with room temperature (an energy which is, expressed in electrical units, approximately one-fortieth of a Volt). So if the energy to flip a magnetic bit is very large, the information can persist for thousands of years […] while if it is very small, the information might only last for a small fraction of a second […] This energy is proportional to the volume of the magnetic bit, and so one immediately sees a problem with making bits smaller and smaller: though you can store bits of information at higher density, there is a very real possibility that the information might be very rapidly scrambled by thermal fluctuations. This motivates the search for materials in which it is very hard to flip the magnetization from one state to the other.”
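
The energy-ratio argument above is usually written as a Néel–Arrhenius relaxation time, τ ≈ τ0·exp(ΔE/kT), with the ‘attempt time’ τ0 of the order of a nanosecond, so the retention time depends exponentially on the ratio of the flip energy ΔE to the thermal energy kT. A rough sketch with typical textbook numbers (my own choices, not the book’s):

# Neel-Arrhenius estimate of how long a magnetic bit retains its state.
import math
tau0 = 1e-9                 # attempt time in seconds (typical order of magnitude)
seconds_per_year = 3.15e7
for ratio in (20, 40, 60):  # energy barrier in units of kT at room temperature
    tau = tau0 * math.exp(ratio)
    print(f"barrier = {ratio} kT: retention ~ {tau:.1e} s (~{tau / seconds_per_year:.1e} years)")
# Going from 40 kT to 60 kT stretches the lifetime by a factor of exp(20), roughly 5e8,
# which is why shrinking the bit volume (and with it the barrier) is so risky.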

“The change in the Earth’s magnetic field over time is a fairly noticeable phenomenon. Every decade or so, compass needles in Africa are shifting by a degree, and the magnetic field overall on planet Earth is about 10% weaker than it was in the 19th century.”

Below I have added some links to topics and people covered/mentioned in the book. Many of the links below have likely also been included in some of the other posts about books from the A Brief Introduction OUP physics series which I’ve posted this year – the main point of adding these links is to give some idea what kind of stuff’s covered in the book:

Magnetism.
Magnetite.
Lodestone.
William Gilbert/De Magnete.
Alessandro Volta.
Ampère’s circuital law.
Charles-Augustin de Coulomb.
Hans Christian Ørsted.
Leyden jar/voltaic cell/battery (electricity).
Solenoid.
Electromagnet.
Homopolar motor.
Michael Faraday.
Electromagnetic induction.
Dynamo.
Zeeman effect.
Alternating current/Direct current.
Nikola Tesla.
Thomas Edison.
Force field (physics).
Ole Rømer.
Centimetre–gram–second system of units.
James Clerk Maxwell.
Maxwell’s equations.
Permittivity.
Permeability (electromagnetism).
Gauss’ law.
Michelson–Morley experiment.
Special relativity.
Drift velocity.
Curie’s law.
Curie temperature.
Andre Geim.
Diamagnetism.
Paramagnetism.
Exchange interaction.
Magnetic domain.
Domain wall (magnetism).
Stern–Gerlach experiment.
Dirac equation.
Giant magnetoresistance.
Spin valve.
Racetrack memory.
Perpendicular recording.
Bubble memory (“an example of a brilliant idea which never quite made it”, as the author puts it).
Single-molecule magnet.
Spintronics.
Earth’s magnetic field.
Aurora.
Van Allen radiation belt.
South Atlantic Anomaly.
Geomagnetic storm.
Geomagnetic reversal.
Magnetar.
ITER (‘International Thermonuclear Experimental Reactor’).
Antiferromagnetism.
Spin glass.
Quantum spin liquid.
Multiferroics.
Spin ice.
Magnetic monopole.
Ice rules.

August 28, 2017 Posted by | Books, Computer science, Geology, Physics | Leave a comment

Words

The words below are words which I encountered while reading the Rex Stout novels Three Men Out, The Black Mountain, Before Midnight, Three Witnesses, Might as Well Be Dead, Three for the Chair, If Death Ever Slept, And Four to Go, Champagne for One, Plot it Yourself, and Three at Wolfe’s Door.

Colloquy. Chouse. Crass. Carnation. Geste. Jalopy. Squall. Dinghy. Indelibly. Totter. Crock. Chuckhole. Squatty. Paramour. Raceme. Brassy. Scuttlebutt. Ruffle. Lug. Bevy.

Autokinesis. Lilt. Convene. Stole. Chives. Squab. Derogation. Entice. Demimondaine/demirep. Mortarboard. Flattop. Gainsay. Skit. Fraternal. Yowl. Pimiento. Dunce. Ruffian. Creel. Minnow.

Roly-poly. Larrup. Ignominy. Herpetology. Brawny. Scalawag. Mulish. Chartreuse. Moot. Indomitable. Braise. Loll. Peculate. Jostle. Factotum. Billingsgate. Croak. Ramekin. Shirr. Shuck.

Dalliance. Ineluctable. Mull. Fogy. Panicle. Mimeograph. Gimcrack. Blacktop. Capon. Stymie. Impervious. Headlong. Aristology. Fleer. Imputation. Cress. Bestir. Cinch. Cantle. Sudadero.

August 24, 2017 Posted by | Books, Language

Infectious Disease Surveillance (III)

I have added some more observations from the book below.

“Zoonotic diseases are infections transmitted between animals and humans […]. A recent survey identified more than 1,400 species of human disease–causing agents, over half (58%) of which were zoonotic [2]. Moreover, nearly three-quarters (73%) of infectious diseases considered to be emerging or reemerging were zoonotic [2]. […] In many countries there is minimal surveillance for live animal imports or imported wildlife products. Minimal surveillance prevents the identification of wildlife trade–related health risks to the public, agricultural industry, and native wildlife [36] and has led to outbreaks of zoonotic diseases […] Southeast Asia [is] a hotspot for emerging zoonotic diseases because of rapid population growth, high population density, and high biodiversity […] influenza virus in particular is of zoonotic importance as multiple human infections have resulted from animal exposure [77–79].”

“[R]abies is an important cause of death in many countries, particularly in Africa and Asia [85]. Rabies is still underreported throughout the developing world, and 100-fold underreporting of human rabies is estimated for most of Africa [44]. Reasons for underreporting include lack of public health personnel, difficulties in identifying suspect animals, and limited laboratory capacity for rabies testing. […] Brucellosis […] is transmissible to humans primarily through consumption of unpasteurized milk or dairy products […] Brucella is classified as a category B bioterrorism agent [90] because of its potential for aerosolization [I should perhaps here mention that the book coverage does overlap a bit with that of Fong & Alibek’s book – which I covered here – but that I decided against covering those topics in much detail here – US] […] The key to preventing brucellosis in humans is to control or eliminate infections in animals [91–93]; therefore, veterinarians are crucial to the identification, prevention, and control of brucellosis [89]. […] Since 1954 [there has been] an ongoing eradication program involving surveillance testing of cattle at slaughter, testing at livestock markets, and whole-herd testing on the farm [in the US] […] Except for endemic brucellosis in wildlife in the Greater Yellowstone Area, all 50 states and territories in the United States are free of bovine brucellosis [94].”

“Because of its high mortality rate in humans in the absence of early treatment, Y. pestis is viewed as one of the most pathogenic human bacteria [101]. In the United States, plague is most often found in the Southwest where it is transmitted by fleas and maintained in rodent populations [102]. Deer mice and voles typically serve as maintenance hosts [and] these animals are often resistant to plague [102]. In contrast, in amplifying host species such as prairie dogs, ground squirrels, chipmunks, and wood rats, plague spreads rapidly and results in high mortality [103]. […] Human infections with Y. pestis can result in bubonic, pneumonic, or septicemic plague, depending on the route of exposure. Bubonic plague is most common; however, pneumonic plague poses a more serious public health risk since it can be easily transmitted person-to-person through inhalation of aerosolized bacteria […] Septicemic plague is characterized by bloodstream infection with Y. pestis and can occur secondary to pneumonic or bubonic forms of infection or as a primary infection [6,60].
Plague outbreaks are often correlated with animal die-offs in the area [104], and rodent control near human residences is important to prevent disease [103]. […] household pets can be an important route of plague transmission and flea control in dogs and cats is an important prevention measure [105]. Plague surveillance involves monitoring three populations for infection: vectors (e.g., fleas), humans, and rodents [106]. In the past 20 years, the numbers of human cases of plague reported in the United States have varied from 1 to 17 cases per year [90]. […]
Since rodent species are the main reservoirs of the bacteria, these animals can be used for sentinel surveillance to provide an early warning of the public health risk to humans [106]. […] Rodent die-offs can often be an early indicator of a plague outbreak”.

“Zoonotic disease surveillance is crucial for protection of human and animal health. An integrated, sustainable system that collects data on incidence of disease in both animals and humans is necessary to ensure prompt detection of zoonotic disease outbreaks and a timely and focused response [34]. Currently, surveillance systems for animals and humans [operate] largely independently [34]. This results in an inability to rapidly detect zoonotic diseases, particularly novel emerging diseases, that are detected in the human population only after an outbreak occurs [109]. While most industrialized countries have robust disease surveillance systems, many developing countries currently lack the resources to conduct both ongoing and real-time surveillance [34,43].”

“Acute hepatitis of any cause has similar, usually indistinguishable, signs and symptoms. Acute illness is associated with fever, fatigue, nausea, abdominal pain, followed by signs of liver dysfunction, including jaundice, light to clay-colored stool, dark urine, and easy bruising. The jaundice, dark urine, and abnormal stool are because of the diminished capacity of the inflamed liver to handle the metabolism of bilirubin, which is a breakdown product of hemoglobin released as red blood cells are normally replaced. In severe hepatitis that is associated with fulminant liver disease, the liver’s capacity to produce clotting factors and to clear potential toxic metabolic products is severely impaired, with resultant bleeding and hepatic encephalopathy. […] An effective vaccine to prevent hepatitis A has been available for more than 15 years, and incidence rates of hepatitis A are dropping wherever it is used in routine childhood immunization programs. […] Currently, hepatitis A vaccine is part of the U.S. childhood immunization schedule recommended by the Advisory Committee on Immunization Practices (ACIP) [31].”

“Chronic hepatitis — persistent and ongoing inflammation that can result from chronic infection — usually has minimal to no signs or symptoms […] Hepatitis B and C viruses cause acute hepatitis as well as chronic hepatitis. The acute component is often not recognized as an episode of acute hepatitis, and the chronic infection may have few or no symptoms for many years. With hepatitis B, clearance of infection is age related, as is presentation with symptoms. Over 90% of infants exposed to HBV develop chronic infection, while <1% have symptoms; 5–10% of adults develop chronic infection, but 50% or more have symptoms associated with acute infection. Among those who acquire hepatitis C, 15–45% clear the infection; the remainder have lifelong infection unless treated specifically for hepatitis C.”

“[D]ata are only received on individuals accessing care. Asymptomatic acute infection and poor or unavailable measurements for high risk populations […] have resulted in questionable estimates of the prevalence and incidence of hepatitis B and C. Further, a lack of understanding of the different types of viral hepatitis by many medical providers [18] has led to many undiagnosed individuals living with chronic infection, who are not captured in disease surveillance systems. […] Evaluation of acute HBV and HCV surveillance has demonstrated a lack of sensitivity for identifying acute infection in injection drug users; it is likely that most cases in this population go undetected, even if they receive medical care [36]. […] Best practices for conducting surveillance for chronic hepatitis B and C are not well established. […] The role of health departments in infectious disease is typically to respond to acute disease. Response to chronic HBV infection is targeted to prevention of transmission to contacts of those infected, especially in high risk situations. Because of the high risk of vertical transmission and likely development of chronic disease in exposed newborns, identification and case management of HBV-infected pregnant women and their infants is a high priority. […] For a number of reasons, states do not conduct uniform surveillance for chronic hepatitis C. There is no agreement as to the utility of surveillance for chronic HCV infection, as it is a measurement of prevalent rather than incident cases.”

“Among all nationally notifiable diseases, three STDs (chlamydia, gonorrhea, and syphilis) are consistently in the top five most commonly reported diseases annually. These three STDs made up more than 86% of all reported diseases in the United States in 2010 [2]. […] The true burden of STDs is likely to be higher, as most infections are asymptomatic [4] and are never diagnosed or reported. A synthesis of a variety of data sources estimated that in 2008 there were over 100 million prevalent STDs and nearly 20 million incident STDs in the United States [5]. […] Nationally, 72% of all reported STDs are among persons aged 15–24 years [3], and it is estimated that 1 in 4 females aged 14–19 has an STD [7]. […] In 2011, the rates of chlamydia, gonorrhea, and primary and secondary syphilis among African-Americans were, respectively, 7.5, 16.9, and 6.7 times the rates among whites [3]. Additionally, men who have sex with men (MSM) are disproportionately infected with STDs. […] several analyses have shown risk ratios above 100 for the associations between being an MSM and having syphilis or HIV [9,10]. […] Many STDs can be transmitted congenitally during pregnancy or birth. In 2008, over 400,000 neonatal deaths and stillbirths were associated with syphilis worldwide […] untreated chlamydia and gonorrhea can cause ophthalmia neonatorum in newborns, which can result in blindness [13]. The medical and societal costs for STDs are high. […] One estimate in 2008 put national costs at $15.6 billion [15].”

“A significant challenge in STD surveillance is that the term “STD” encompasses a variety of infections. Currently, there are over 35 pathogens that can be transmitted sexually, including bacteria […] protozoa […] and ectoparasites […]. Some infections can cause clinical syndromes shortly after exposure, whereas others result in no symptoms or have a long latency period. Some STDs can be easily diagnosed using self-collected swabs, while others require a sample of blood or a physical examination by a clinician. Consequently, no one particular surveillance strategy works for all STDs. […] The asymptomatic nature of most STDs limits inferences from case-based surveillance, since in order to be counted in this system an infection must be diagnosed and reported. Additionally, many infections never result in disease. For example, an estimated 90% of human papillomavirus (HPV) infections resolve on their own without sequelae [24]. As such, simply counting infections may not be appropriate, and sequelae must also be monitored. […] Strategies for STD surveillance include case reporting; sentinel surveillance; opportunistic surveillance, including use of administrative data and positivity in screened populations; and population-based studies […] the choice of strategy depends on the type of STD and the population of interest.”

“Determining which diseases and conditions should be included in mandatory case reporting requires balancing the benefits to the public health system (e.g., utility of the data) with the costs and burdens of case reporting. While many epidemiologists and public health practitioners follow the mantra “the more data, the better,” the costs (in both dollars and human resources) of developing and maintaining a robust case-based reporting system can be large. Case-based surveillance has been mandated for chlamydia, gonorrhea, syphilis, and chancroid nationally; but expansion of state-initiated mandatory reporting for other STDs is controversial.”

August 18, 2017 Posted by | Books, Epidemiology, Immunology, Infectious disease, Medicine

Depression and Heart Disease (II)

Below I have added some more observations from the book, which I gave four stars on goodreads.

“A meta-analysis of twin (and family) studies estimated the heritability of adult MDD around 40% [16] and this estimate is strikingly stable across different countries [17, 18]. If measurement error due to unreliability is taken into account by analysing MDD assessed on two occasions, heritability estimates increase to 66% [19]. Twin studies in children further show that there is already a large genetic contribution to depressive symptoms in youth, with heritability estimates varying between 50% and 80% [20–22]. […] Cardiovascular research in twin samples has suggested a clear-cut genetic contribution to hypertension (h2 = 61%) [30], fatal stroke (h2 = 32%) [31] and CAD (h2 = 57% in males and 38% in females) [32]. […] A very important, and perhaps underestimated, source of pleiotropy in the association of MDD and CAD are the major behavioural risk factors for CAD: smoking and physical inactivity. These factors are sometimes considered ‘environmental’, but twin studies have shown that such behaviours have a strong genetic component [33–35]. Heritability estimates for [many] established risk factors [for CAD – e.g. BMI, smoking, physical inactivity – US] are 50% or higher in most adult twin samples and these estimates remain remarkably similar across the adult life span [41–43].”
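
The heritability figures quoted above come from comparing trait correlations in monozygotic and dizygotic twin pairs. As a rough illustration of where such numbers come from, here is a minimal sketch of Falconer's classical ACE decomposition; the twin correlations used are made-up values chosen so that the heritability comes out near the ~40% cited for MDD, not figures from the book.

```python
# Minimal sketch (not from the book) of Falconer's classical ACE decomposition:
#   h2 = 2 * (r_MZ - r_DZ)   additive genetic variance (A)
#   c2 = 2 * r_DZ - r_MZ     shared-environment variance (C)
#   e2 = 1 - r_MZ            non-shared environment + measurement error (E)
# Full twin analyses use structural equation models, but this is the core intuition.

def falconer_ace(r_mz: float, r_dz: float) -> tuple[float, float, float]:
    h2 = 2 * (r_mz - r_dz)
    c2 = 2 * r_dz - r_mz
    e2 = 1 - r_mz
    return h2, c2, e2

# Illustrative (assumed) twin correlations for a depression score:
h2, c2, e2 = falconer_ace(r_mz=0.40, r_dz=0.20)
print(f"h2 = {h2:.2f}, c2 = {c2:.2f}, e2 = {e2:.2f}")   # h2 = 0.40, c2 = 0.00, e2 = 0.60
```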

“The crucial question is whether the genetic factors underlying MDD also play a role in CAD and CAD risk factors. To test for an overlap in the genetic factors, a bivariate extension of the structural equation model for twin data can be used [57]. […] If the depressive symptoms in a twin predict the IL-6 level in his/her co-twin, this can only be explained by an underlying factor that affects both depression and IL-6 levels and is shared by members of a family. If the prediction is much stronger in MZ than in DZ twins, this signals that the underlying factor is their shared genetic make-up, rather than their shared (family) environment. […] It is important to note clearly here that genetic correlations do not prove the existence of pleiotropy, because genes that influence MDD may, through causal effects of MDD on CAD risk, also become ‘CAD genes’. The absence of a genetic correlation, however, can be used to falsify the existence of genetic pleiotropy. For instance, the hypothesis that genetic pleiotropy explains part of the association between depressive symptoms and IL-6 requires the genetic correlation between these traits to be significantly different from zero. [Furthermore,] the genetic correlation should have a positive value. A negative genetic correlation would signal that genes that increase the risk for depression decrease the risk for higher IL-6 levels, which would go against the genetic pleiotropy hypothesis. […] Su et al. [26] […] tested pleiotropy as a possible source of the association of depressive symptoms with IL-6 in 188 twin pairs of the Vietnam Era Twin (VET) Registry. The genetic correlation between depressive symptoms and IL-6 was found to be positive and significant (RA = 0.22, p = 0.046)”
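
The cross-twin cross-trait logic described in that passage can be made concrete with a toy simulation. The sketch below is mine, not the authors': it assumes a single genetic factor, with made-up effect sizes, influencing both depressive symptoms and IL-6 (pure pleiotropy, no causal path), and shows that twin 1's depression score then predicts twin 2's IL-6 level roughly twice as strongly in MZ pairs as in DZ pairs, which is the signature the bivariate twin model formalizes.

```python
import numpy as np

# Toy simulation (not from the book) of the cross-twin cross-trait argument:
# one additive genetic factor A influences both depressive symptoms and IL-6.
# MZ twins share all of A; DZ twins share half of it on average.

rng = np.random.default_rng(0)
n_pairs = 200_000

def cross_twin_cross_trait_r(shared_fraction: float) -> float:
    """Correlation of twin 1's depression score with twin 2's IL-6 level."""
    a_shared = rng.standard_normal(n_pairs)   # genetic factor common to both twins in a pair
    a1_spec = rng.standard_normal(n_pairs)    # genetic deviation specific to twin 1
    a2_spec = rng.standard_normal(n_pairs)    # genetic deviation specific to twin 2
    a1 = np.sqrt(shared_fraction) * a_shared + np.sqrt(1 - shared_fraction) * a1_spec
    a2 = np.sqrt(shared_fraction) * a_shared + np.sqrt(1 - shared_fraction) * a2_spec
    dep1 = 0.6 * a1 + 0.8 * rng.standard_normal(n_pairs)                # depressive symptoms, twin 1
    il6_2 = 0.5 * a2 + np.sqrt(0.75) * rng.standard_normal(n_pairs)     # IL-6 level, twin 2
    return float(np.corrcoef(dep1, il6_2)[0, 1])

print("MZ pairs:", round(cross_twin_cross_trait_r(1.0), 3))   # ~0.30
print("DZ pairs:", round(cross_twin_cross_trait_r(0.5), 3))   # ~0.15, about half the MZ value
```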

“For the association between MDD and physical inactivity, the dominant hypothesis has not been that MDD causes a reduction in regular exercise, but instead that regular exercise may act as a protective factor against mood disorders. […] we used the twin method to perform a rigorous test of this popular hypothesis [on] 8558 twins and their family members using their longitudinal data across 2-, 4-, 7-, 9- and 11-year follow-up periods. In spite of sufficient statistical power, we found only the genetic correlation to be significant (ranging between −0.16 and −0.44 for different symptom scales and different time-lags). The environmental correlations were essentially zero. This means that the environmental factors that cause a person to take up exercise do not cause lower anxiety or depressive symptoms in that person, currently or at any future time point. In contrast, the genetic factors that cause a person to take up exercise also cause lower anxiety or depressive symptoms in that person, at the present and all future time points. This pattern of results falsifies the causal hypothesis and leaves genetic pleiotropy as the most likely source for the association between exercise and lower levels of anxiety and depressive symptoms in the population at large. […] Taken together, [the] studies support the idea that genetic pleiotropy may be a factor contributing to the increased risk for CAD in subjects suffering from MDD or reporting high counts of depressive symptoms. The absence of environmental correlations in the presence of significant genetic correlations for a number of the CAD risk factors (CFR, cholesterol, inflammation and regular exercise) suggests that pleiotropy is the sole reason for the association between MDD and these CAD risk factors, whereas for other CAD risk factors (e.g. smoking) and CAD incidence itself, pleiotropy may coexist with causal effects.”

“By far the most tested polymorphism in psychiatric genetics is a 43-base pair insertion or deletion in the promoter region of the serotonin transporter gene (5HTT, renamed SLC6A4). About 55% of Caucasians carry a long allele (L) with 16 repeat units. The short allele (S, with 14 repeat units) of this length polymorphism repeat (LPR) reduces transcriptional efficiency, resulting in decreased serotonin transporter expression and function [83]. Because serotonin plays a key role in one of the major theories of MDD [84], and because the most prescribed antidepressants act directly on this transporter, 5HTT is an obvious candidate gene for this disorder. […] The wealth of studies attempting to associate the 5HTTLPR to MDD or related personality traits tells a revealing story about the fate of most candidate genes in psychiatric genetics. Many conflicting findings have been reported, and the two largest studies failed to link the 5HTTLPR to depressive symptoms or clinical MDD [85, 86]. Even at the level of reviews and meta-analyses, conflicting conclusions have been drawn about the role of this polymorphism in the development of MDD [87, 88]. The initially promising explanation for discrepant findings – potential interactive effects of the 5HTTLPR and stressful life events [89] – did not survive meta-analysis [90].”

“Across the board, overlooking the wealth of candidate gene studies on MDD, one is inclined to conclude that this approach has failed to unambiguously identify genetic variants involved in MDD […]. Hope is now focused on the newer GWA [genome wide association] approach. […] At the time of writing, only two GWA studies had been published on MDD [81, 95]. […] In theory, the strategy to identify potential pleiotropic genes in the MDD–CAD relationship is extremely straightforward. We simply select the genes that occur in the lists of confirmed genes from the GWA studies for both traits. In practice, this is hard to do, because genetics in psychiatry is clearly lagging behind genetics in cardiology and diabetes medicine. […] What is shown by the reviewed twin studies is that some genetic variants may influence MDD and CAD risk factors. This can occur through one of three mechanisms: (a) the genetic variants that increase the risk for MDD become part of the heritability of CAD through a causal effect of MDD on CAD risk factors (causality); (b) the genetic variants that increase the risk for CAD become part of the heritability of MDD through a direct causal effect of CAD on MDD (reverse causality); (c) the genetic variants influence shared risk factors that independently increase the risk for MDD as well as CAD (pleiotropy). I suggest that to fully explain the MDD–CAD association we need to be willing to be open to the possibility that these three mechanisms co-exist. Even in the presence of true pleiotropic effects, MDD may influence CAD risk factors, and having CAD in turn may worsen the course of MDD.”

“Patients with depression are more likely to exhibit several unhealthy behaviours or avoid other health-promoting ones than those without depression. […] Patients with depression are more likely to have sleep disturbances [6]. […] sleep deprivation has been linked with obesity, diabetes and the metabolic syndrome [13]. […] Physical inactivity and depression display a complex, bidirectional relationship. Depression leads to physical inactivity and physical inactivity exacerbates depression [19]. […] smoking rates among those with depression are about twice that of the general population [29]. […] Poor attention to self-care is often a problem among those with major depressive disorder. In the most severe cases, those with depression may become inattentive to their personal hygiene. One aspect of this relationship that deserves special attention with respect to cardiovascular disease is the association of depression and periodontal disease. […] depression is associated with poor adherence to medical treatment regimens in many chronic illnesses, including heart disease. […] There is some evidence that among patients with an acute coronary syndrome, improvement in depression is associated with improvement in adherence. […] Individuals with depression are often socially withdrawn or isolated. It has been shown that patients with heart disease who are depressed have less social support [64], and that social isolation or poor social support is associated with increased mortality in heart disease patients [65–68]. […] [C]linicians who make recommendations to patients recovering from a heart attack should be aware that low levels of social support and social isolation are particularly common among depressed individuals and that high levels of social support appear to protect patients from some of the negative effects of depression [78].”

“Self-efficacy describes an individual’s self-confidence in his/her ability to accomplish a particular task or behaviour. Self-efficacy is an important construct to consider when one examines the psychological mechanisms linking depression and heart disease, since it influences an individual’s engagement in behaviour and lifestyle changes that may be critical to improving cardiovascular risk. Many studies on individuals with chronic illness show that depression is often associated with low self-efficacy [95–97]. […] Low self-efficacy is associated with poor adherence behaviour in patients with heart failure [101]. […] Much of the interest in self-efficacy comes from the fact that it is modifiable. Self-efficacy-enhancing interventions have been shown to improve cardiac patients’ self-efficacy and thereby improve cardiac health outcomes [102]. […] One problem with targeting self-efficacy in depressed heart disease patients is [however] that depressive symptoms reduce the effects of self-efficacy-enhancing interventions [105, 106].”

“Taken together, [the] SADHART and ENRICHD [studies] suggest, but do not prove, that antidepressant drug therapy in general, and SSRI treatment in particular, improve cardiovascular outcomes in depressed post-acute coronary syndrome (ACS) patients. […] even large epidemiological studies of depression and antidepressant treatment are not usually informative, because they confound the effects of depression and antidepressant treatment. […] However, there is one Finnish cohort study in which all subjects […] were followed up through a nationwide computerised database [17]. The purpose of this study was not to examine the relationship between depression and cardiac mortality, but rather to look at the relationship between antidepressant use and suicide. […] unexpectedly, ‘antidepressant use, and especially SSRI use, was associated with a marked reduction in total mortality (−49%, p < 0.001), mostly attributable to a decrease in cardiovascular deaths’. The study involved 15 390 patients with a mean follow-up of 3.4 years […] One of the marked differences between the SSRIs and the earlier tricyclic antidepressants is that the SSRIs do not cause cardiac death in overdose as the tricyclics do [41]. There has been literature that suggested that tricyclics even at therapeutic doses could be cardiotoxic and more problematic than SSRIs [42, 43]. What has been surprising is that both in the clinical trial data from ENRICHD and the epidemiological data from Finland, tricyclic treatment has also been associated with a decreased risk of mortality. […] Given that SSRI treatment of depression in the post-ACS period is safe, effective in reducing depressed mood, able to improve health behaviours and may reduce subsequent cardiac morbidity and mortality, it would seem obvious that treating depression is strongly indicated. However, the vast majority of post-ACS patients will not see a psychiatrically trained professional and many cases are not identified [33].”

“That depression is associated with cardiovascular morbidity and mortality is no longer open to question. Similarly, there is no question that the risk of morbidity and mortality increases with increasing severity of depression. Questions remain about the mechanisms that underlie this association, whether all types of depression carry the same degree of risk and to what degree treating depression reduces that risk. There is no question that the benefits of treating depression associated with coronary artery disease far outweigh the risks.”

“Two competing trends are emerging in research on psychotherapy for depression in cardiac patients. First, the few rigorous RCTs that have been conducted so far have shown that even the most efficacious of the current generation of interventions produce relatively modest outcomes. […] Second, there is a growing recognition that, even if an intervention is highly efficacious, it may be difficult to translate into clinical practice if it requires intensive or extensive contacts with a highly trained, experienced, clinically sophisticated psychotherapist. It can even be difficult to implement such interventions in the setting of carefully controlled, randomised efficacy trials. Consequently, there are efforts to develop simpler, more efficient interventions that can be delivered by a wider variety of interventionists. […] Although much more work remains to be done in this area, enough is already known about psychotherapy for comorbid depression in heart disease to suggest that a higher priority should be placed on translation of this research into clinical practice. In many cases, cardiac patients do not receive any treatment for their depression.”

August 14, 2017 Posted by | Books, Cardiology, Diabetes, Genetics, Medicine, Pharmacology, Psychiatry, Psychology