i. “Only the most uncritical minds are free from doubt.” (Aldo Leopold)
ii. “If you do not tell the truth about yourself you cannot tell it about other people.” (Virginia Woolf)
iii. “Though we see the same world, we see it through different eyes.” (-ll-)
iv. “No greater mistake can be made than to think that our institutions are fixed or may not be changed for the worse.” (Charles Evans Hughes)
v. “The image of ourselves in the minds of others is the picture of a stranger we shall never see.” (Elizabeth Bibesco)
vi. “Everybody continually tries to get away with as much as he can; and society is a marvelous machine which allows decent people to be cruel without realizing it.” (Émile Chartier)
vii. “When a man steals your wife, there is no better revenge than to let him keep her.” (Sacha Guitry)
viii. “Equipped with his five senses, man explores the universe around him and calls the adventure Science.” (Edwin Hubble)
ix. “There are two kinds of fools: one says, ‘This is old, therefore it is good’; the other says, ‘This is new, therefore it is better.’” (William Ralph Inge)
x. “We know too many things that are not true.” (Charles Kettering)
xi. “There are truths which one can only say after having won the right to say them.” (Jean Cocteau)
xii. “Where all think alike, no one thinks very much.” (Walter Lippmann)
xiii. “It requires wisdom to understand wisdom: the music is nothing if the audience is deaf.” (-ll-)
xiv. “The past is a foreign country; they do things differently there.” (L.P. Hartley)
xv. “To know is not too demanding: it merely requires memory and time. But to understand is quite a different matter: it requires intellectual ability and training, a self conscious awareness of what one is doing, experience in techniques of analysis and synthesis, and above all, perspective.” (Carroll Quigley)
xvi. “The basis of social relationships is reciprocity: if you cooperate with others, others will cooperate with you.” (-ll-. But be careful…)
xvii. “Self-pity? I see no moral objections to it, the smell drives people away, but that’s a practical objection, and occasionally an advantage.” (E. M. Forster)
xviii. “You are neither right nor wrong because people agree with you.” (Benjamin Graham)
xix. “Men substitute words for reality and then argue about the words.” (Edwin Howard Armstrong)
xx. “Science aims at constructing a world which shall be symbolic of the world of commonplace experience. It is not at all necessary that every individual symbol that is used should represent something in common experience or even something explicable in terms of common experience. The man in the street is always making this demand for concrete explanation of the things referred to in science; but of necessity he must be disappointed. It is like our experience in learning to read. That which is written in a book is symbolic of a story in real life. The whole intention of the book is that ultimately a reader will identify some symbol, say BREAD, with one of the conceptions of familiar life. But it is mischievous to attempt such identifications prematurely, before the letters are strung into words and the words into sentences. The symbol A is not the counterpart of anything in familiar life.” (Arthur Eddington)
Here’s the first post about the book. I finished it a while ago, but I recently realized that I never completed my intended coverage of it here on the blog back then. As some of the book’s material sort-of-kind-of relates to material encountered in a book I’m currently reading (Biodemography of Aging), I decided I might as well finish my coverage now, both to review some things I might have forgotten in the meantime and to cover some of the material from the second half of the book. It’s a nice book with some interesting observations, but as I also pointed out in my first post, it is definitely not an easy read. Below I have included some observations from the book’s second half.
“The aged lung is characterised by airspace enlargement similar to, but not identical with acquired emphysema. Such tissue damage is detected even in non-smokers above 50 years of age as the septa of the lung alveoli are destroyed and the enlarged alveolar structures result in a decreased surface for gas exchange […] Additional problems are that surfactant production decreases with age, increasing the effort needed to expand the lungs during inhalation in the already reduced thoracic cavity volume where the weakened muscles are unable to thoroughly ventilate. […] As ageing is associated with respiratory muscle strength reduction, coughing becomes difficult making it progressively challenging to eliminate inhaled particles, pollens, microbes, etc. Additionally, ciliary beat frequency (CBF) slows down with age impairing the lungs’ first line of defence: mucociliary clearance, as the cilia can no longer repel invading microorganisms and particles. Consequently e.g. bacteria can more easily colonise the airways leading to infections that are frequent in the pulmonary tract of the older adult.”
“With age there are dramatic changes in neutrophil function, including reduced chemotaxis, phagocytosis and bactericidal mechanisms […] reduced bactericidal function will predispose to infection but the reduced chemotaxis also has consequences for lung tissue as this results in increased tissue bystander damage from neutrophil elastases released during migration […] It is currently accepted that alterations in pulmonary PPAR profile, more precisely loss of PPARγ activity, can lead to inflammation, allergy, asthma, COPD, emphysema, fibrosis, and cancer […]. Since it has been reported that PPARγ activity decreases with age, this provides a possible explanation for the increasing incidence of these lung diseases and conditions in older individuals.”
“Age is an important risk factor for cancer and subjects aged over 60 also have a higher risk of comorbidities. Approximately 50 % of neoplasms occur in patients older than 70 years […] a major concern for poor prognosis is with cancer patients over 70–75 years. These patients have a lower functional reserve, a higher risk of toxicity after chemotherapy, and an increased risk of infection and renal complications that lead to a poor quality of life. […] [Whereas] there is a difference in organs with higher cancer incidence in developed versus developing countries [,] incidence increases with ageing almost irrespective of country […] The findings from the Surveillance, Epidemiology and End Results Program [SEER – incidentally I likely shall at some point discuss this one in much more detail, as the aforementioned biodemography textbook covers this data in a lot of detail.. – US] show that almost a third of all cancers are diagnosed after the age of 75 years and 70 % of cancer-related deaths occur after the age of 65 years. […] The traditional clinical trial focus is on younger and healthier patients, i.e. with few or no co-morbidities. These restrictions have resulted in a lack of data about the optimal treatment for older patients and a poor evidence base for therapeutic decisions. […] In the older patient, neutropenia, anemia, mucositis, cardiomyopathy and neuropathy — the toxic effects of chemotherapy — are more pronounced […] The correction of comorbidities and malnutrition can lead to greater safety in the prescription of chemotherapy […] Immunosenescence is a general classification for changes occurring in the immune system during the ageing process, as the distribution and function of cells involved in innate and adaptive immunity are impaired or remodelled […] Immunosenescence is considered a major contributor to cancer development in aged individuals.”
“Dementia and age-related vision loss are major causes of disability in our ageing population and it is estimated that a third of people aged over 75 are affected. […] age is the largest risk factor for the development of neurodegenerative diseases […] older patients with comorbidities such as atherosclerosis, type II diabetes or those suffering from repeated or chronic systemic bacterial and viral infections show earlier onset and progression of clinical symptoms […] analysis of post-mortem brain tissue from healthy older individuals has provided evidence that the presence of misfolded proteins alone does not correlate with cognitive decline and dementia, implying that additional factors are critical for neural dysfunction. We now know that innate immune genes and life-style contribute to the onset and progression of age-related neuronal dysfunction, suggesting that chronic activation of the immune system plays a key role in the underlying mechanisms that lead to irreversible tissue damage in the CNS. […] Collectively these studies provide evidence for a critical role of inflammation in the pathogenesis of a range of neurodegenerative diseases, but the factors that drive or initiate inflammation remain largely elusive.”
“The effect of infection, mimicked experimentally by administration of bacterial lipopolysaccharide (LPS) has revealed that immune to brain communication is a critical component of a host organism’s response to infection and a collection of behavioural and metabolic adaptations are initiated over the course of the infection with the purpose of restricting the spread of a pathogen, optimising conditions for a successful immune response and preventing the spread of infection to other organisms. These behaviours are mediated by an innate immune response and have been termed ‘sickness behaviours’ and include depression, reduced appetite, anhedonia, social withdrawal, reduced locomotor activity, hyperalgesia, reduced motivation, cognitive impairment and reduced memory encoding and recall […]. Metabolic adaptations to infection include fever, altered dietary intake and reduction in the bioavailability of nutrients that may facilitate the growth of a pathogen such as iron and zinc. These behavioural and metabolic adaptations are evolutionarily highly conserved and also occur in humans”.
“Sickness behaviour and transient microglial activation are beneficial for individuals with a normal, healthy CNS, but in the ageing or diseased brain the response to peripheral infection can be detrimental and increases the rate of cognitive decline. Aged rodents exhibit exaggerated sickness and prolonged neuroinflammation in response to systemic infection […] Older people who contract a bacterial or viral infection or experience trauma postoperatively, also show exaggerated neuroinflammatory responses and are prone to develop delirium, a condition which results in a severe short term cognitive decline and a long term decline in brain function […] Collectively these studies demonstrate that peripheral inflammation can increase the accumulation of two neuropathological hallmarks of AD, further strengthening the hypothesis that inflammation i[s] involved in the underlying pathology. […] Studies from our own laboratory have shown that AD patients with mild cognitive impairment show a fivefold increased rate of cognitive decline when contracting a systemic urinary tract or respiratory tract infection […] Apart from bacterial infection, chronic viral infections have also been linked to increased incidence of neurodegeneration, including cytomegalovirus (CMV). This virus is ubiquitously distributed in the human population, and along with other age-related diseases such as cardiovascular disease and cancer, has been associated with increased risk of developing vascular dementia and AD [66, 67].”
“Frailty is associated with changes to the immune system, importantly the presence of a pro-inflammatory environment and changes to both the innate and adaptive immune system. Some of these changes have been demonstrated to be present before the clinical features of frailty are apparent suggesting the presence of potentially modifiable mechanistic pathways. To date, exercise programme interventions have shown promise in the reversal of frailty and related physical characteristics, but there is no current evidence for successful pharmacological intervention in frailty. […] In practice, acute illness in a frail person results in a disproportionate change in a frail person’s functional ability when faced with a relatively minor physiological stressor, associated with a prolonged recovery time […] Specialist hospital services such as surgery, hip fractures and oncology have now begun to recognise frailty as an important predictor of mortality and morbidity.”
I should probably mention here that this is another area where there’s an overlap between this book and the biodemography text I’m currently reading; chapter 7 of the latter text is about ‘Indices of Cumulative Deficits’ and covers this kind of stuff in a lot more detail than does this one, including e.g. detailed coverage of relevant statistical properties of one such index. Anyway, back to the coverage:
“Population based studies have demonstrated that the incidence of infection and subsequent mortality is higher in populations of frail people. […] The prevalence of pneumonia in a nursing home population is 30 times higher than the general population [39, 40]. […] The limited data available demonstrates that frailty is associated with a state of chronic inflammation. There is also evidence that inflammageing predates a diagnosis of frailty suggesting a causative role. […] A small number of studies have demonstrated a dysregulation of the innate immune system in frailty. Frail adults have raised white cell and neutrophil count. […] High white cell count can predict frailty at a ten-year follow-up. […] A recent meta-analysis and four individual systematic reviews have found beneficial evidence of exercise programmes on selected physical and functional ability […] exercise interventions may have no positive effect in operationally defined frail individuals. […] To date there is no clear evidence that pharmacological interventions improve or ameliorate frailty.”
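For readers unfamiliar with the ‘Indices of Cumulative Deficits’ mentioned earlier, the usual construction (the Rockwood/Mitnitski frailty index; the indices covered in the two books may differ in detail) is just a proportion, and can be sketched in a few lines of Python:

```python
# Sketch of a cumulative-deficit frailty index. Each deficit is coded
# from 0 (absent) to 1 (fully expressed); the index is the proportion
# of considered deficits that are present. Data below are hypothetical.

def frailty_index(deficits):
    """Proportion of the considered deficits that are present."""
    if not deficits:
        raise ValueError("need at least one deficit variable")
    return sum(deficits) / len(deficits)

# Hypothetical person assessed on 10 deficit variables:
person = [1, 0, 0.5, 1, 0, 0, 1, 0, 0, 0.5]
print(frailty_index(person))  # 0.4
```

In practice such indices are computed over dozens of deficit variables, which is part of what gives them their robust statistical properties.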
“[A]s we get older the time and intensity at which we exercise are severely reduced. Physical inactivity now accounts for a considerable proportion of age-related disease and mortality. […] Regular exercise has been shown to improve neutrophil microbicidal functions which reduce the risk of infectious disease. Exercise participation is also associated with increased immune cell telomere length, and may be related to improved vaccine responses. The anti-inflammatory effect of regular exercise and negative energy balance is evident by reduced inflammatory immune cell signatures and lower inflammatory cytokine concentrations. […] Reduced physical activity is associated with a positive energy balance leading to increased adiposity and subsequently systemic inflammation. […] Elevated neutrophil counts accompany increased inflammation with age and the increased ratio of neutrophils to lymphocytes is associated with many age-related diseases including cancer. Compared to more active individuals, less active and overweight individuals have higher circulating neutrophil counts. […] little is known about the intensity, duration and type of exercise which can provide benefits to neutrophil function. […] it remains unclear whether exercise and physical activity can override the effects of NK cell dysfunction in the old. […] A considerable number of studies have assessed the effects of acute and chronic exercise on measures of T-cell immunosenescence including T cell subsets, phenotype, proliferation, cytokine production, chemotaxis, and co-stimulatory capacity. […] Taken together exercise appears to promote an anti-inflammatory response which is mediated by altered adipocyte function and improved energy metabolism leading to suppression of pro-inflammatory cytokine production in immune cells.”
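As a trivial aside, the neutrophil-to-lymphocyte ratio mentioned in the passage above is simply a quotient of two cell counts; a minimal sketch, with hypothetical counts:

```python
# The neutrophil-to-lymphocyte ratio (NLR) is just the quotient of two
# blood cell counts; values here are hypothetical, in 10^9 cells/L.

def nlr(neutrophils, lymphocytes):
    """Neutrophil-to-lymphocyte ratio; higher values accompany inflammation."""
    if lymphocytes <= 0:
        raise ValueError("lymphocyte count must be positive")
    return neutrophils / lymphocytes

print(nlr(4.2, 2.1))  # 2.0
```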
“This book is written to provide […] a useful balance of theoretical treatment, description of empirical analyses and breadth of content for use in undergraduate modules in health economics for economics students, and for students taking a health economics module as part of their postgraduate training. Although we are writing from a UK perspective, we have attempted to make the book as relevant internationally as possible by drawing on examples, case studies and boxed highlights, not just from the UK, but from a wide range of countries”
I’m currently reading this book. The coverage has been somewhat disappointing because it’s mostly an undergraduate text which has so far mainly been covering concepts and ideas I’m already familiar with, but it’s not terrible – just okay-ish. I have added some observations from the first half of the book below.
“Health economics is the application of economic theory, models and empirical techniques to the analysis of decision making by people, health care providers and governments with respect to health and health care. […] Health economics has evolved into a highly specialised field, drawing on related disciplines including epidemiology, statistics, psychology, sociology, operations research and mathematics […] health economics is not shorthand for health care economics. […] Health economics studies not only the provision of health care, but also how this impacts on patients’ health. Other means by which health can be improved are also of interest, as are the determinants of ill-health. Health economics studies not only how health care affects population health, but also the effects of education, housing, unemployment and lifestyles.”
“Economic analyses have been used to explain the rise in obesity. […] The studies show that reasons for the rise in obesity include: *Technological innovation in food production and transportation that has reduced the cost of food preparation […] *Agricultural innovation and falling food prices that has led to an expansion in food supply […] *A decline in physical activity, both at home and at work […] *An increase in the number of fast-food outlets, resulting in changes to the relative prices of meals […]. *A reduction in the prevalence of smoking, which leads to increases in weight (Chou et al., 2004).”
“[T]he evidence is that ageing is in reality a relatively small factor in rising health care costs. The popular view is known as the ‘expansion of morbidity’ hypothesis. Gruenberg (1977) suggested that the decline in mortality that has led to an increase in the number of older people is because fewer people die from illnesses that they have, rather than because disease incidence and prevalence are lower. Lower mortality is therefore accompanied by greater morbidity and disability. However, Fries (1980) suggested an alternative hypothesis, ‘compression of morbidity’. Lower mortality rates are due to better health amongst the population, so people not only live longer, they are in better health when old. […] Zweifel et al. (1999) examined the hypothesis that the main determinant of high health care costs amongst older people is not the time since they were born, but the time until they die. Their results, confirmed by many subsequent studies, are that proximity to death does indeed explain higher health care costs better than age per se. Seshamani and Gray (2004) estimated that in the UK this is a factor up to 15 years before death, and annual costs increase tenfold during the last 5 years of life. The consensus is that ageing per se contributes little to the continuing rise in health expenditures that all countries face. Much more important drivers are improved quality of care, access to care, and more expensive new technology.”
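The proximity-to-death point can be illustrated with a toy cost function in which age drops out entirely and only time-to-death matters. All figures below are invented, except the rough tenfold increase across the last five years of life quoted from Seshamani and Gray (2004):

```python
# Toy model of the proximity-to-death result: annual health care cost
# depends on (whole) years remaining until death, not on age. Numbers
# are hypothetical apart from the rough tenfold final-5-years increase.

BASELINE = 1_000  # hypothetical annual cost far from death

def annual_cost(years_to_death):
    """Annual cost for a person this many whole years from death."""
    if years_to_death >= 15:
        return BASELINE          # proximity effect not yet visible
    if years_to_death >= 5:
        return 2 * BASELINE      # modestly elevated 5-15 years out
    # integer-exact linear ramp from 2x at 5 years out to 10x at death
    return BASELINE * (50 - 8 * years_to_death) // 5

# Two people of very different ages but equal time-to-death are
# predicted to cost the same -- age per se drops out:
for ytd in (20, 10, 4, 1, 0):
    print(ytd, annual_cost(ytd))
```

Under this kind of model, an ageing population raises aggregate costs mainly by shifting how many people are in their final years at any given time, not through age itself.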
“The difference between AC [average cost] and MC [marginal cost] is very important in applied health economics. Very often data are available on the average cost of health care services but not on their marginal cost. However, using average costs as if they were marginal costs may mislead. For example, hospital costs will be reduced by schemes that allow some patients to be treated in the community rather than being admitted. Given data on total costs of inpatient stays, it is possible to calculate an average cost per patient. It is tempting to conclude that avoiding an admission will reduce costs by that amount. However, the average includes patients with different levels of illness severity, and the more severe the illness the more costly they will be to treat. Less severely ill patients are most likely to be suitable for treatment in the community, so MC will be lower than AC. Such schemes will therefore produce a lower cost reduction than the estimate of AC suggests.
A problem with multi-product cost functions is that it is not possible to define meaningfully what the AC of a particular product is. If different products share some inputs, the costs of those inputs cannot be solely attributed to any one of them. […] In practice, when multi-product organisations such as hospitals calculate costs for particular products, they use accounting rules to share out the costs of all inputs and calculate average not marginal costs.”
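The AC-versus-MC pitfall is easy to see with made-up numbers: if only the mild (cheap) patients are candidates for community treatment, the average cost per admission overstates the saving from each avoided admission.

```python
# Made-up numbers for the AC-versus-MC pitfall described above: a ward
# treats severe and mild patients, but only mild patients can be
# shifted to community treatment.

severe = [9_000] * 50   # hypothetical cost per severe inpatient
mild = [3_000] * 50     # hypothetical cost per mild inpatient

total_cost = sum(severe) + sum(mild)
average_cost = total_cost / (len(severe) + len(mild))
marginal_cost_avoided = 3_000   # the admission actually avoided is mild

print(average_cost)                          # 6000.0
print(average_cost - marginal_cost_avoided)  # AC overstates the saving by 3000.0
```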
“Studies of economies of scale in the health sector do not give a consistent and generalisable picture. […] studies of scope economies [also] do not show any consistent and generalisable picture. […] The impact of hospital ownership type on a range of key outcomes is generally ambiguous, with different studies yielding conflicting results. […] The association between hospital ownership and patient outcomes is unclear. The evidence is mixed and inconclusive regarding the impact of hospital ownership on access to care, morbidity, mortality, and adverse events.”
“Public goods are goods that are consumed jointly by all consumers. The strict economics definition of a public good is that they have two characteristics. The first is non-rivalry. This means that the consumption of a good or service by one person does not prevent anyone else from consuming it. Non-rival goods therefore have large marginal external benefits, which make them socially very desirable but privately unprofitable to provide. Examples of nonrival goods are street lighting and pavements. The second is non-excludability. This means that it is not possible to provide a good or service to one person without letting others also consume it. […] This may lead to a free-rider problem, in which people are unwilling to pay for goods and services that are of value to them. […] Note the distinction between public goods, which are goods and services that are non-rival and non-excludable, and publicly provided goods, which are goods or services that are provided by the government for any reason. […] Most health care products and services are not public goods because they are both rival and excludable. […] However, some health care, particularly public health programmes, does have public good properties.”
“[H]ealth care is typically consumed under conditions of uncertainty with respect to the timing of health care expenditure […] and the amount of expenditure on health care that is required […] The usual solution to such problems is insurance. […] Adverse selection exists when exactly the wrong people, from the point of view of the insurance provider, choose to buy insurance: those with high risks. […] Those who are most likely to buy health insurance are those who have a relatively high probability of becoming ill and maybe also incur greater costs than the average when they are ill. […] Adverse selection arises because of the asymmetry of information between insured and insurer. […] Two approaches are adopted to prevent adverse selection. The first is experience rating, where the insurance provider sets a different insurance premium for different risk groups. Those who apply for health insurance might be asked to undergo a medical examination and
to disclose any relevant facts concerning their risk status. […] There are two problems with this approach. First, the cost of acquiring the appropriate information may be high. […] Secondly, it might encourage insurance providers to ‘cherry pick’ people, only choosing to provide insurance to the low risk. This may mean that high-risk people are unable to obtain health insurance at all. […] The second approach is to make health insurance compulsory. […] The problem with this is that low-risk people effectively subsidise the health insurance payments of those with higher risks, which may be regarded […] as inequitable.”
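A minimal sketch of how experience rating works, assuming actuarially fair premiums (the expected payout per risk group, plus an optional administrative loading; all figures are hypothetical):

```python
# Sketch of experience rating: an actuarially fair premium for a risk
# group is the expected payout -- probability of illness times expected
# cost -- plus an optional loading. All figures are hypothetical.

def fair_premium(p_ill, expected_cost, loading=0.0):
    """Expected payout plus an optional administrative loading."""
    return p_ill * expected_cost * (1 + loading)

low_risk = fair_premium(0.05, 20_000)
high_risk = fair_premium(0.30, 20_000)

# A single community-rated premium pooled across both groups overcharges
# the low risks, who may then opt out -- the adverse selection problem:
pooled = (low_risk + high_risk) / 2
print(low_risk, pooled, high_risk)
```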
“Health insurance changes the economic incentives facing both the consumers and the providers of health care. One manifestation of these changes is the existence of moral hazard. This is a phenomenon common to all forms of insurance. The suggestion is that when people are insured against risks and their consequences, they are less careful about minimising them. […] Moral hazard arises when it is possible to alter the probability of the insured event, […] or the size of the insured loss […] The extent of the problem depends on the price elasticity of demand […] Three main mechanisms can be used to reduce moral hazard. The first is co-insurance. Many insurance policies require that when an event occurs the insured shares the insured loss […] with the insurer. The co-insurance rate is the percentage of the insured loss that is paid by the insured. The co-payment is the amount that they pay. […] The second is deductibles. A deductible is an amount of money the insured pays when a claim is made irrespective of co-insurance. The insurer will not pay the insured loss unless the deductible is paid by the insured. […] The third is no-claims bonuses. These are payments made by insurers to discourage claims. They usually take the form of reduced insurance premiums in the next period. […] No-claims bonuses typically discourage insurance claims where the payout by the insurer is small.”
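The cost-sharing arithmetic can be sketched as follows, assuming a simple hypothetical policy design in which the deductible is paid first and the co-insurance rate applies to the remainder:

```python
# Sketch of deductible-plus-co-insurance cost sharing, assuming the
# deductible is paid first and co-insurance applies to the remainder.
# All figures are hypothetical.

def out_of_pocket(loss, deductible, coinsurance_rate):
    """Amount the insured pays on a claim of size `loss`."""
    if loss <= deductible:
        return loss
    return deductible + coinsurance_rate * (loss - deductible)

# A 2_000 claim with a 500 deductible and a 20% co-insurance rate:
oop = out_of_pocket(2_000, 500, 0.20)
print(oop)          # 500 + 0.2 * 1500 = 800.0
print(2_000 - oop)  # the insurer pays the remaining 1200.0
```

Because the insured now bears part of every marginal unit of care, both mechanisms blunt the moral hazard incentive described above.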
“The method of reimbursement relates to the way in which health care providers are paid for the services they provide. It is useful to distinguish between reimbursement methods, because they can affect the quantity and quality of health care. […] Retrospective reimbursement at full cost means that hospitals receive payment in full for all health care expenditures incurred in some pre-specified period of time. Reimbursement is retrospective in the sense that not only are hospitals paid after they have provided treatment, but also in that the size of the payment is determined after treatment is provided. […] Which model is used depends on whether hospitals are reimbursed for actual costs incurred, or on a fee-for-service (FFS) basis. […] Since hospital income [in these models] depends on the actual costs incurred (actual costs model) or on the volume of services provided (FFS model) there are few incentives to minimise costs. […] Prospective reimbursement implies that payments are agreed in advance and are not directly related to the actual costs incurred. […] incentives to reduce costs are greater, but payers may need to monitor the quality of care provided and access to services. If the hospital receives the same income regardless of quality, there is a financial incentive to provide low-quality care […] The problem from the point of view of the third-party payer is how best to monitor the activities of health care providers, and how to encourage them to act in a mutually beneficial way. This problem might be reduced if health care providers and third-party payers are linked in some way so that they share common goals. […] Integration between third-party payers and health care providers is a key feature of managed care.”
One of the prospective reimbursement models applied today may be of particular interest to Danes, as the DRG system is a big part of the financial model of the Danish health care system – so I’ve added a few details about this type of system below:
“An example of prospectively set costs per case is the diagnostic-related groups (DRG) pricing scheme introduced into the Medicare system in the USA in 1984, and subsequently used in a number of other countries […] Under this scheme, DRG payments are based on average costs per case in each diagnostic group derived from a sample of hospitals. […] Predicted effects of the DRG pricing scheme are cost shifting, patient shifting and DRG creep. Cost shifting and patient shifting are ways of circumventing the cost-minimising effects of DRG pricing by shifting patients or some of the services provided to patients out of the DRG pricing scheme and into other parts of the system not covered by DRG pricing. For example, instead of being provided on an inpatient basis, treatment might be provided on an outpatient basis where it is reimbursed retrospectively. DRG creep arises when hospitals classify cases into DRGs that carry a higher payment, indicating that they are more complicated than they really are. This might arise, for instance, when cases have multiple diagnoses.”
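A minimal sketch of the tariff-setting idea described above: the payment for each diagnostic group is the average cost per case in that group across a sample of hospitals (all codes and costs below are invented):

```python
# Sketch of DRG tariff derivation as described: the prospective payment
# per case in each diagnostic group is the average cost per case in
# that group across a sample of hospitals. All data are invented.

from collections import defaultdict
from statistics import mean

# (DRG code, cost of one treated case) observed at sample hospitals
sample_cases = [
    ("DRG_127", 4_800), ("DRG_127", 5_200),
    ("DRG_089", 9_500), ("DRG_089", 10_500), ("DRG_089", 10_000),
]

costs_by_drg = defaultdict(list)
for drg, cost in sample_cases:
    costs_by_drg[drg].append(cost)

# Fixed in advance: the hospital keeps the surplus on cheap cases and
# bears the loss on expensive ones -- hence the cost-minimising pressure,
# and also the temptation towards DRG creep.
tariffs = {drg: mean(costs) for drg, costs in costs_by_drg.items()}
print(tariffs)
```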
I liked the book. Below I have added some sample observations from the book, as well as a collection of links to various topics covered/mentioned in the book.
“To make a variety of rocks, there needs to be a variety of minerals. The Earth has shown a capacity for making an increasing variety of minerals throughout its existence. Life has helped in this [but] [e]ven a dead planet […] can evolve a fine array of minerals and rocks. This is done simply by stretching out the composition of the original homogeneous magma. […] Such stretching of composition would have happened as the magma ocean of the earliest […] Earth cooled and began to solidify at the surface, forming the first crust of this new planet — and the starting point, one might say, of our planet’s rock cycle. When magma cools sufficiently to start to solidify, the first crystals that form do not have the same composition as the overall magma. In a magma of ‘primordial Earth’ type, the first common mineral to form was probably olivine, an iron-and-magnesium-rich silicate. This is a dense mineral, and so it tends to sink. As a consequence the remaining magma becomes richer in elements such as calcium and aluminium. From this, at temperatures of around 1,000°C, the mineral plagioclase feldspar would then crystallize, in a calcium-rich variety termed anorthite. This mineral, being significantly less dense than olivine, would tend to rise to the top of the cooling magma. On the Moon, itself cooling and solidifying after its fiery birth, layers of anorthite crystals several kilometres thick built up as the rock — anorthosite — of that body’s primordial crust. This anorthosite now forms the Moon’s ancient highlands, subsequently pulverized by countless meteorite impacts. This rock type can be found on Earth, too, particularly within ancient terrains. […] Was the Earth’s first surface rock also anorthosite? 
Probably—but we do not know for sure, as the Earth, a thoroughly active planet throughout its existence, has consumed and obliterated nearly all of the crust that formed in the first several hundred million years of its existence, in a mysterious interval of time that we now call the Hadean Eon. […] The earliest rocks that we know of date from the succeeding Archean Eon.”
“Where plates are pulled apart, then pressure is released at depth, above the ever-opening tectonic rift, for instance beneath the mid-ocean ridge that runs down the centre of the Atlantic Ocean. The pressure release from this crustal stretching triggers decompression melting in the rocks at depth. These deep rocks — peridotite — are dense, being rich in the iron- and magnesium-bearing mineral olivine. Heated to the point at which melting just begins, so that the melt fraction makes up only a few percentage points of the total, those melt droplets are enriched in silica and aluminium relative to the original peridotite. The melt will have a composition such that, when it cools and crystallizes, it will largely be made up of crystals of plagioclase feldspar together with pyroxene. Add a little more silica and quartz begins to appear. With less silica, olivine crystallizes instead of quartz.
“The resulting rock is basalt. If there was anything like a universal rock of rocky planet surfaces, it is basalt. On Earth it makes up almost all of the ocean floor bedrock — in other words, the ocean crust, that is, the surface layer, some 10 km thick. Below, there is a boundary called the Mohorovičić Discontinuity (or ‘Moho’ for short)[…]. The Moho separates the crust from the dense peridotitic mantle rock that makes up the bulk of the lithosphere. […] Basalt makes up most of the surface of Venus, Mercury, and Mars […]. On the Moon, the ‘mare’ (‘seas’) are not of water but of basalt. Basalt, or something like it, will certainly be present in large amounts on the surfaces of rocky exoplanets, once we are able to bring them into close enough focus to work out their geology. […] At any one time, ocean floor basalts are the most common rock type on our planet’s surface. But any individual piece of ocean floor is, geologically, only temporary. It is the fate of almost all ocean crust — islands, plateaux, and all — to be destroyed within ocean trenches, sliding down into the Earth along subduction zones, to be recycled within the mantle. From that destruction […] there arise the rocks that make up the most durable component of the Earth’s surface: the continents.”
“Basaltic magmas are a common starting point for many other kinds of igneous rocks, through the mechanism of fractional crystallization […]. Remove the early-formed crystals from the melt, and the remaining melt will evolve chemically, usually in the direction of increasing proportions of silica and aluminium, and decreasing amounts of iron and magnesium. These magmas will therefore produce intermediate rocks such as andesites and diorites in the finely and coarsely crystalline varieties, respectively; and then more evolved silica-rich rocks such as rhyolites (fine), microgranites (medium), and granites (coarse). […] Granites themselves can evolve a little further, especially at the late stages of crystallization of large bodies of granite magma. The final magmas are often water-rich ones that contain many of the incompatible elements (such as thorium, uranium, and lithium), so called because they are difficult to fit within the molecular frameworks of the common igneous minerals. From these final ‘sweated-out’ magmas there can crystallize a coarsely crystalline rock known as pegmatite — famous because it contains a wide variety of minerals (of the ~4,500 minerals officially recognized on Earth […] some 500 have been recognized in pegmatites).”
“The less oxygen there is [at the area of deposition], the more the organic matter is preserved into the rock record, and it is where the seawater itself, by the sea floor, has little or no oxygen that some of the great carbon stores form. As animals cannot live in these conditions, organic-rich mud can accumulate quietly and undisturbed, layer by layer, here and there entombing the skeleton of some larger planktonic organism that has fallen in from the sunlit, oxygenated waters high above. It is these kinds of sediments that […] generate[d] the oil and gas that currently power our civilization. […] If sedimentary layers have not been buried too deeply, they can remain as soft muds or loose sands for millions of years — sometimes even for hundreds of millions of years. However, most buried sedimentary layers, sooner or later, harden and turn into rock, under the combined effects of increasing heat and pressure (as they become buried ever deeper under subsequent layers of sediment) and of changes in chemical environment. […] As rocks become buried ever deeper, they become progressively changed. At some stage, they begin to change their character and depart from the condition of sedimentary strata. At this point, usually beginning several kilometres below the surface, buried igneous rocks begin to transform too. The process of metamorphism has started, and may progress until those original strata become quite unrecognizable.”
“Frozen water is a mineral, and this mineral can make up a rock, both on Earth and, very commonly, on distant planets, moons, and comets […]. On Earth today, there are large deposits of ice strata on the cold polar regions of Antarctica and Greenland, with smaller amounts in mountain glaciers […]. These ice strata, the compressed remains of annual snowfalls, have simply piled up, one above the other, over time; on Antarctica, they reach almost 5 km in thickness and at their base are about a million years old. […] The ice cannot pile up for ever, however: as the pressure builds up it begins to behave plastically and to slowly flow downslope, eventually melting or, on reaching the sea, breaking off as icebergs. As the ice mass moves, it scrapes away at the underlying rock and soil, shearing these together to form a mixed deposit of mud, sand, pebbles, and characteristic striated (ice-scratched) cobbles and boulders […] termed a glacial till. Glacial tills, if found in the ancient rock record (where, hardened, they are referred to as tillites), are a sure clue to the former presence of ice.”
“At first approximation, the mantle is made of solid rock and is not […] a seething mass of magma that the fragile crust threatens to founder into. This solidity is maintained despite temperatures that, towards the base of the mantle, are of the order of 3,000°C — temperatures that would very easily melt rock at the surface. It is the immense pressures deep in the Earth, increasing more or less in step with temperature, that keep the mantle rock in solid form. In more detail, the solid rock of the mantle may include greater or lesser (but usually lesser) amounts of melted material, which locally can gather to produce magma chambers […] Nevertheless, the mantle rock is not solid in the sense that we might imagine at the surface: it is mobile, and much of it is slowly moving plastically, taking long journeys that, over many millions of years, may encompass the entire thickness of the mantle (the kinds of speeds estimated are comparable to those at which tectonic plates move, of a few centimetres a year). These are the movements that drive plate tectonics and that, in turn, are driven by the variation in temperature (and therefore density) from the contact region with the hot core, to the cooler regions of the upper mantle.”
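A quick sanity check of that last claim, using my own round numbers rather than anything from the quoted text (a mantle roughly 2,900 km thick, and a speed of 3 cm/year, i.e. ‘a few centimetres a year’):

```python
# Rough check (my own figures, not the book's): how long would a mantle-deep
# journey take at plate-tectonic speeds?
mantle_thickness_m = 2.9e6        # mantle is roughly 2,900 km thick
speed_m_per_yr = 0.03             # 'a few centimetres a year'

years = mantle_thickness_m / speed_m_per_yr
print(f"~{years / 1e6:.0f} million years")   # on the order of 100 million years
```

So ‘many millions of years’ checks out; at these speeds a full traverse of the mantle takes on the order of a hundred million years.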
“The outer core will not transmit certain types of seismic waves, which indicates that it is molten. […] Even farther into the interior, at the heart of the Earth, this metal magma becomes rock once more, albeit a rock that is mostly crystalline iron and nickel. However, it was not always so. The core used to be liquid throughout and then, some time ago, it began to crystallize into iron-nickel rock. Quite when this happened has been widely debated, with estimates ranging from over three billion years ago to about half a billion years ago. The inner core has now grown to something like 2,400 km across. Even allowing for the huge spans of geological time involved, this implies estimated rates of solidification that are impressive in real time — of some thousands of tons of molten metal crystallizing into solid form per second.”
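That last figure is easy to sanity-check. The numbers below are my own rough assumptions, not the book's: an inner-core density of about 12,800 kg/m³ and a growth time of about one billion years (well within the quoted range of estimates):

```python
import math

# Back-of-the-envelope check (my own assumed figures, not the book's):
# mass of the inner core divided by its assumed growth time.
radius_m = 2_400e3 / 2                  # inner core ~2,400 km across
volume_m3 = (4 / 3) * math.pi * radius_m ** 3
mass_kg = 12_800 * volume_m3            # assumed density ~12,800 kg/m^3
seconds = 1e9 * 365.25 * 24 * 3600      # assumed growth time ~1 billion years

rate_tonnes_per_s = mass_kg / seconds / 1000
print(f"{rate_tonnes_per_s:,.0f} tonnes per second")  # roughly 3,000 tonnes/s
```

Which agrees nicely with the quoted ‘some thousands of tons of molten metal crystallizing into solid form per second’.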
“Rocks are made out of minerals, and those minerals are not a constant of the universe. A little like biological organisms, they have evolved and diversified through time. As the minerals have evolved, so have the rocks that they make up. […] The pattern of evolution of minerals was vividly outlined by Robert Hazen and his colleagues in what is now a classic paper published in 2008. They noted that in the depths of outer space, interstellar dust, as analysed by the astronomers’ spectroscopes, seems to be built of only about a dozen minerals […] Their component elements were forged in supernova explosions, and these minerals condensed among the matter and radiation that streamed out from these stellar outbursts. […] the number of minerals on the new Earth [shortly after formation was] about 500 (while the smaller, largely dry Moon has about 350). Plate tectonics began, with its attendant processes of subduction, mountain building, and metamorphism. The number of minerals rose to about 1,500 on a planet that may still have been biologically dead. […] The origin and spread of life at first did little to increase the number of mineral species, but once oxygen-producing photosynthesis started, then there was a great leap in mineral diversity as, for each mineral, various forms of oxide and hydroxide could crystallize. After this step, about two and a half billion years ago, there were over 4,000 minerals, most of them vanishingly rare. Since then, there may have been a slight increase in their numbers, associated with such events as the appearance and radiation of metazoan animals and plants […] Humans have begun to modify the chemistry and mineralogy of the Earth’s surface, and this has included the manufacture of many new types of mineral. […] Human-made minerals are produced in laboratories and factories around the world, with many new forms appearing every year. 
[…] Materials sciences databases now being compiled suggest that more than 50,000 solid, inorganic, crystalline species have been created in the laboratory.”
Some links of interest:
Rock. Presolar grains. Silicate minerals. Silicon–oxygen tetrahedron. Quartz. Olivine. Feldspar. Mica. Jean-Baptiste Biot. Meteoritics. Achondrite/Chondrite/Chondrule. Carbonaceous chondrite. Iron–nickel alloy. Widmanstätten pattern. Giant-impact hypothesis (in the book this is not framed as a hypothesis nor is it explicitly referred to as the GIH; it’s just taken to be the correct account of what happened back then – US). Alfred Wegener. Arthur Holmes. Plate tectonics. Lithosphere. Asthenosphere. Fractional Melting (couldn’t find a wiki link about this exact topic; the MIT link is quite technical – sorry). Hotspot (geology). Fractional crystallization. Metastability. Devitrification. Porphyry (geology). Phenocryst. Thin section. Neptunism. Pyroclastic flow. Ignimbrite. Pumice. Igneous rock. Sedimentary rock. Weathering. Slab (geology). Clay minerals. Conglomerate (geology). Breccia. Aeolian processes. Hummocky cross-stratification. Ralph Alger Bagnold. Montmorillonite. Limestone. Ooid. Carbonate platform. Turbidite. Desert varnish. Evaporite. Law of Superposition. Stratigraphy. Pressure solution. Compaction (geology). Recrystallization (geology). Cleavage (geology). Phyllite. Aluminosilicate. Gneiss. Rock cycle. Ultramafic rock. Serpentinite. Pressure-Temperature-time paths. Hornfels. Impactite. Ophiolite. Xenolith. Kimberlite. Transition zone (Earth). Mantle convection. Mantle plume. Core–mantle boundary. Post-perovskite. Earth’s inner core. Inge Lehmann. Stromatolites. Banded iron formations. Microbial mat. Quorum sensing. Cambrian explosion. Bioturbation. Biostratigraphy. Coral reef. Radiolaria. Carbonate compensation depth. Paleosol. Bone bed. Coprolite. Allan Hills 84001. Tharsis. Pedestal crater. Mineraloid. Concrete.
Here’s the link. I don’t usually cover this sort of stuff, but I have quoted extensively from the report below because this is some nice data, and nice data sometimes disappear from the internet if you don’t copy them in time.
The sample sizes here are large (“The total number of respondents was 10,195 (c. 1,000 per country).”), and a brief skim of the wiki article about Chatham House hardly gives the impression that this is an extreme right-wing think tank with a hidden agenda (Hillary Clinton, for example, received the Chatham House Prize just a few years ago). Data was gathered online, which of course might lead to slightly different results than offline data procurement strategies would; but if anything this seems to me to imply that the opposition seen in the data is more likely a lower-bound estimate than an upper-bound estimate. According to the data, older people, rural people, and people with lower education levels are all more opposed than their counterparts, and these groups are also less likely to be online, so all else equal they should probably be expected, if anything, to be under-sampled in a data set relying exclusively on data provided online. Note incidentally that you could probably infer some implicit effect sizes here if you wanted to; for example, by comparing the differences relating to age and education, it seems that age is the far more important variable, at least if your interest is in the people who agree with the statement provided by Chatham House. (Of course, when you only have data like this you should be very careful about making inferences about the importance of specific variables, but I can’t help noting that part of the education variable/effect may just be a hidden age effect; I’m reasonably certain education levels have increased over time in all the countries surveyed.)
“Drawing on a unique, new Chatham House survey of more than 10,000 people from 10 European states, we can throw new light on what people think about migration from mainly Muslim countries. […] respondents were given the following statement: ‘All further migration from mainly Muslim countries should be stopped’. They were then asked to what extent did they agree or disagree with this statement. Overall, across all 10 of the European countries an average of 55% agreed that all further migration from mainly Muslim countries should be stopped, 25% neither agreed nor disagreed and 20% disagreed.
Majorities in all but two of the ten states agreed, ranging from 71% in Poland, 65% in Austria, 53% in Germany and 51% in Italy to 47% in the United Kingdom and 41% in Spain. In no country did the percentage that disagreed surpass 32%.”
“Public opposition to further migration from Muslim states is especially intense in Austria, Poland, Hungary, France and Belgium, despite these countries having very different sized resident Muslim populations. In each of these countries, at least 38% of the sample ‘strongly agreed’ with the statement. […] across Europe, opposition to Muslim immigration is especially intense among retired, older age cohorts while those aged below 30 are notably less opposed. There is also a clear education divide. Of those with secondary level qualifications, 59% opposed further Muslim immigration. By contrast, less than half of all degree holders supported further migration curbs.”
“Of those living in rural, less populated areas, 58% are opposed to further Muslim immigration. […] among those based in cities and metropolitan areas just over half agree with the statement and around a quarter are less supportive of a ban. […] nearly two-thirds of those who feel they don’t have control over their own lives [supported] the statement. Similarly, 65% of those Europeans who are dissatisfied with their life oppose further migration from Muslim countries. […] These results chime with other surveys exploring attitudes to Islam in Europe. In a Pew survey of 10 European countries in 2016, majorities of the public had an unfavorable view of Muslims living in their country in five countries: Hungary (72%), Italy (69%), Poland (66%), Greece (65%), and Spain (50%), although those numbers were lower in the UK (28%), Germany (29%) and France (29%). There was also a widespread perception in many countries that the arrival of refugees would increase the likelihood of terrorism, with a median of 59% across ten European countries holding this view.”
I’ve usually in the past combined these lists with other stuff, but I am now strongly considering making these lists into posts of their own, so that a potential lack of ‘other stuff’ to include does not stop me from posting the words; stuff I don’t blog is more likely to get lost to my memory, so I don’t want to give myself any more excuses than necessary not to blog stuff I want to remember/learn. Most of the words are from books I’ve read over the last weeks; I rarely spend time on vocabulary.com these days (I don’t encounter enough new words on the site to justify a significant amount of activity there; there are too many review questions, likely a result of me having mastered words much faster than they’ve added new ones).
I’ve by now decided to stop (more or less) systematically checking in each case whether I’ve already included a word on a similar list in a previous post; not all the words on these lists will from now on necessarily be ‘new’ to me (to the extent that the words on the previous lists have been, that is). Some of these words (and the words to come, assuming other posts will follow) are likely just words I’ve forgotten about, and some are words I simply consider ‘nice’/‘unappreciated’/‘not encountered often enough’. I decided to split the words in this post up into smaller groups, as one big chunk of words looked slightly scary and unapproachable to me. There’s no system to the groupings: the words were originally added at random to a list I keep of words I knew I’d want to get back to at some point, and the cut-offs I later applied when writing this post were more or less completely arbitrary. If you want non-arbitrary groups of interesting words, I refer to the goodreads lists.
“A recent study estimated that 234 million surgical procedures requiring anaesthesia are performed worldwide annually. Anaesthesia is the largest hospital specialty in the UK, with over 12,000 practising anaesthetists […] In this book, I give a short account of the historical background of anaesthetic practice, a review of anaesthetic equipment, techniques, and medications, and a discussion of how they work. The risks and side effects of anaesthetics will be covered, and some of the subspecialties of anaesthetic practice will be explored.”
I liked the book, and I gave it three stars on goodreads; I was closer to four stars than to two. Below I have added a few sample observations from the book, as well as what turned out to be a quite considerable number of links (more than 60, from a brief count) to topics/people/etc. discussed or mentioned in the text. I decided to spend a bit more time finding relevant links than I’ve previously done when writing link-heavy posts, so in this post I have not limited myself to wikipedia articles; I e.g. also link directly to primary literature discussed in the coverage. The links provided are, as usual, meant as indicators of which kind of stuff is covered in the book, rather than as an alternative to the book; some of the wikipedia articles in particular I assume are not very good (the main point of a link to a wikipedia article of questionable quality is probably just to indicate that I consider awareness of the existence of concept X to be of interest/importance also to people who have not read this book, even if no great resource on the topic was immediately at hand to me).
Sample observations from the book:
“[G]eneral anaesthesia is not sleep. In physiological terms, the two states are very dissimilar. The term general anaesthesia refers to the state of unconsciousness which is deliberately produced by the action of drugs on the patient. Local anaesthesia (and its related terms) refers to the numbness produced in a part of the body by deliberate interruption of nerve function; this is typically achieved without affecting consciousness. […] The purpose of inhaling ether vapour [in the past] was so that surgery would be painless, not so that unconsciousness would necessarily be produced. However, unconsciousness and immobility soon came to be considered desirable attributes […] For almost a century, lying still was the only reliable sign of adequate anaesthesia.”
“The experience of pain triggers powerful emotional consequences, including fear, anger, and anxiety. A reasonable word for the emotional response to pain is ‘suffering’. Pain also triggers the formation of memories which remind us to avoid potentially painful experiences in the future. The intensity of pain perception and suffering also depends on the mental state of the subject at the time, and the relationship between pain, memory, and emotion is subtle and complex. […] The effects of adrenaline are responsible for the appearance of someone in pain: pale, sweating, trembling, with a rapid heart rate and breathing. Additionally, a hormonal storm is activated, readying the body to respond to damage and fight infection. This is known as the stress response. […] Those responses may be abolished by an analgesic such as morphine, which will counteract all those changes. For this reason, it is routine to use analgesic drugs in addition to anaesthetic ones. […] Typical anaesthetic agents are poor at suppressing the stress response, but analgesics like morphine are very effective. […] The hormonal stress response can be shown to be harmful, especially to those who are already ill. For example, the increase in blood coagulability which evolved to reduce blood loss as a result of injury makes the patient more likely to suffer a deep venous thrombosis in the leg veins.”
“If we monitor the EEG of someone under general anaesthesia, certain identifiable changes to the signal occur. In general, the frequency spectrum of the signal slows. […] Next, the overall power of the signal diminishes. In very deep general anaesthesia, short periods of electrical silence, known as burst suppression, can be observed. Finally, the overall randomness of the signal, its entropy, decreases. In short, the EEG of someone who is anaesthetized looks completely different from someone who is awake. […] Depth of anaesthesia is no longer considered to be a linear concept […] since it is clear that anaesthesia is not a single process. It is now believed that the two most important components of anaesthesia are unconsciousness and suppression of the stress response. These can be represented on a three-dimensional diagram called a response surface. [Here’s incidentally a recent review paper on related topics, US]”
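As an aside of my own (this is not from the book): the claim that the EEG’s entropy decreases under anaesthesia can be made concrete by computing the Shannon entropy of a signal’s normalized power spectrum. Below is a toy sketch comparing a noise-like ‘awake’ trace with a slow, regular ‘anaesthetized’ one; the synthetic signals, the naive DFT, and all parameters are purely illustrative choices of mine:

```python
import math, random

def spectral_entropy(x):
    """Shannon entropy (in bits) of the normalized power spectrum of x,
    computed with a naive DFT (fine for a toy example of this size)."""
    n = len(x)
    power = []
    for k in range(1, n // 2):   # skip the DC term
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        power.append(re * re + im * im)
    total = sum(power)
    probs = [p / total for p in power if p > 0]
    return -sum(p * math.log2(p) for p in probs)

random.seed(0)
n = 256
awake = [random.gauss(0, 1) for _ in range(n)]                          # broadband, noise-like
anaesthetized = [math.sin(2 * math.pi * 4 * t / n) for t in range(n)]   # slow, regular oscillation

# Power spread across many frequencies -> high entropy; power concentrated
# at one slow frequency -> low entropy.
print(spectral_entropy(awake) > spectral_entropy(anaesthetized))  # True
```

A real depth-of-anaesthesia monitor is of course far more sophisticated than this, but the basic intuition — a regular, slowed signal carries a more concentrated spectrum, hence lower entropy — is the same.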
“Before the widespread advent of anaesthesia, there were very few painkilling options available. […] Alcohol was commonly given as a means of enhancing the patient’s courage prior to surgery, but alcohol has almost no effect on pain perception. […] For many centuries, opium was the only effective pain-relieving substance known. […] For general anaesthesia to be discovered, certain prerequisites were required. On the one hand, the idea that surgery without pain was achievable had to be accepted as possible. Despite tantalizing clues from history, this idea took a long time to catch on. The few workers who pursued this idea were often openly ridiculed. On the other, an agent had to be discovered that was potent enough to render a patient suitably unconscious to tolerate surgery, but not so potent that overdose (hence accidental death) was too likely. This agent also needed to be easy to produce, tolerable for the patient, and easy enough for untrained people to administer. The herbal candidates (opium, mandrake) were too unreliable or dangerous. The next reasonable candidate, and every agent since, was provided by the proliferating science of chemistry.”
“Inducing anaesthesia by intravenous injection is substantially quicker than the inhalational method. Inhalational induction may take several minutes, while intravenous induction happens in the time it takes for the blood to travel from the needle to the brain (30 to 60 seconds). The main benefit of this is not convenience or comfort but patient safety. […] It was soon discovered that the ideal balance is to induce anaesthesia intravenously, but switch to an inhalational agent […] to keep the patient anaesthetized during the operation. The template of an intravenous induction followed by maintenance with an inhalational agent is still widely used today. […] Most of the drawbacks of volatile agents disappear when the patient is already anaesthetized [and] volatile agents have several advantages for maintenance. First, they are predictable in their effects. Second, they can be conveniently administered in known quantities. Third, the concentration delivered or exhaled by the patient can be easily and reliably measured. Finally, at steady state, the concentration of volatile agent in the patient’s expired air is a close reflection of its concentration in the patient’s brain. This gives the anaesthetist a reliable way of ensuring that enough anaesthetic is present to ensure the patient remains anaesthetized.”
“All current volatile agents are colourless liquids that evaporate into a vapour which produces general anaesthesia when inhaled. All are chemically stable, which means they are non-flammable, and not likely to break down or be metabolized to poisonous products. What distinguishes them from each other are their specific properties: potency, speed of onset, and smell. Potency of an inhalational agent is expressed as MAC, the minimum alveolar concentration required to keep 50% of adults unmoving in response to a standard surgical skin incision. MAC as a concept was introduced […] in 1963, and has proven to be a very useful way of comparing potencies of different anaesthetic agents. […] MAC correlates with observed depth of anaesthesia. It has been known for over a century that potency correlates very highly with lipid solubility; that is, the more soluble an agent is in lipid […], the more potent an anaesthetic it is. This is known as the Meyer-Overton correlation […] Speed of onset is inversely proportional to water solubility. The less soluble in water, the more rapidly an agent will take effect. […] Where immobility is produced at around 1.0 MAC, amnesia is produced at a much lower dose, typically 0.25 MAC, and unconsciousness at around 0.5 MAC. Therefore, a patient may move in response to a surgical stimulus without either being conscious of the stimulus, or remembering it afterwards.”
“The most useful way to estimate the body’s physiological reserve is to assess the patient’s tolerance for exercise. Exercise is a good model of the surgical stress response. The greater the patient’s tolerance for exercise, the better the perioperative outcome is likely to be […] For a smoker who is unable to quit, stopping for even a couple of days before the operation improves outcome. […] Dying ‘on the table’ during surgery is very unusual. Patients who die following surgery usually do so during convalescence, their weakened state making them susceptible to complications such as wound breakdown, chest infections, deep venous thrombosis, and pressure sores.”
“Mechanical ventilation is based on the principle of intermittent positive pressure ventilation (IPPV), gas being ‘blown’ into the patient’s lungs from the machine. […] Inflating a patient’s lungs is a delicate process. Healthy lung tissue is fragile, and can easily be damaged by overdistension (barotrauma). While healthy lung tissue is light and spongy, and easily inflated, diseased lung tissue may be heavy and waterlogged and difficult to inflate, and therefore may collapse, allowing blood to pass through it without exchanging any gases (this is known as shunt). Simply applying higher pressures may not be the answer: this may just overdistend adjacent areas of healthier lung. The ventilator must therefore provide a series of breaths whose volume and pressure are very closely controlled. Every aspect of a mechanical breath may now be adjusted by the anaesthetist: the volume, the pressure, the frequency, and the ratio of inspiratory time to expiratory time are only the basic factors.”
“All anaesthetic drugs are poisons. Remember that in achieving a state of anaesthesia you intend to poison someone, but not kill them – so give as little as possible. [Introductory quote to a chapter, from an Anaesthetics textbook – US] […] Other cells besides neurons use action potentials as the basis of cellular signalling. For example, the synchronized contraction of heart muscle is performed using action potentials, and action potentials are transmitted from nerves to skeletal muscle at the neuromuscular junction to initiate movement. Local anaesthetic drugs are therefore toxic to the heart and brain. In the heart, local anaesthetic drugs interfere with normal contraction, eventually stopping the heart. In the brain, toxicity causes seizures and coma. To avoid toxicity, the total dose is carefully limited”.
Links of interest:
Arthur Ernest Guedel.
Henry Hill Hickman.
William Thomas Green Morton.
James Young Simpson.
Joseph Thomas Clover.
Principles of Total Intravenous Anaesthesia (TIVA).
Laryngeal mask airway.
Gate control theory of pain.
Hartmann’s solution (…what this is called seems to depend on whom you ask, but it’s called Hartmann’s solution in the book…).
Epidural nerve block.
Intensive care medicine.
Bjørn Aage Ibsen.
Pearse et al. (results of paper briefly discussed in the book).
Awareness under anaesthesia (skip the first page).
Pollard et al. (2007).
Postoperative nausea and vomiting.
Postoperative cognitive dysfunction.
Monk et al. (2008).
(Smbc, second one here. There were a lot of relevant ones to choose from – this one also seems ‘relevant’. And this one. And this one. This one? This one? This one? Maybe this one? In the end I decided to only include the two comics displayed above, but you should be aware of the others…)
The book is a bit dated; it was published before the LHC even started operations. But it’s a decent read. I can’t say I liked it as much as the other books in the series I recently covered, on galaxies and the laws of thermodynamics, mostly because this book is a bit more pop-science-y than those, and so the level of coverage was at times a little disappointing by comparison. That said, the book is far from terrible, I learned a lot, and I can imagine the author faced a very difficult task.
Below I have added a few observations from the book and some links to articles about some key concepts and things mentioned/covered in the book.
“[T]oday we view the collisions between high-energy particles as a means of studying the phenomena that ruled when the universe was newly born. We can study how matter was created and discover what varieties there were. From this we can construct the story of how the material universe has developed from that original hot cauldron to the cool conditions here on Earth today, where matter is made from electrons, without need for muons and taus, and where the seeds of atomic nuclei are just the up and down quarks, without need for strange or charming stuff.
In very broad terms, this is the story of what has happened. The matter that was born in the hot Big Bang consisted of quarks and particles like the electron. As concerns the quarks, the strange, charm, bottom, and top varieties are highly unstable, and died out within a fraction of a second, the weak force converting them into their more stable progeny, the up and down varieties which survive within us today. A similar story took place for the electron and its heavier versions, the muon and tau. This latter pair are also unstable and died out, courtesy of the weak force, leaving the electron as survivor. In the process of these decays, lots of neutrinos and electromagnetic radiation were also produced, which continue to swarm throughout the universe some 14 billion years later.
The up and down quarks and the electrons were the survivors while the universe was still very young and hot. As it cooled, the quarks were stuck to one another, forming protons and neutrons. The mutual gravitational attraction among these particles gathered them into large clouds that were primaeval stars. As they bumped into one another in the heart of these stars, the protons and neutrons built up the seeds of heavier elements. Some stars became unstable and exploded, ejecting these atomic nuclei into space, where they trapped electrons to form atoms of matter as we know it. […] What we can now do in experiments is in effect reverse the process and observe matter change back into its original primaeval forms.”
“A fully grown human is a bit less than two metres tall. […] to set the scale I will take humans to be about 1 metre in ‘order of magnitude’ […yet another smbc comic springs to mind here…] […] Then, going to the large scales of astronomy, we have the radius of the Earth, some 10⁷ m […]; that of the Sun is 10⁹ m; our orbit around the Sun is 10¹¹ m […] note that the relative sizes of the Earth, Sun, and our orbit are factors of about 100. […] Whereas the atom is typically 10⁻¹⁰ m across, its central nucleus measures only about 10⁻¹⁴ to 10⁻¹⁵ m. So beware the oft-quoted analogy that atoms are like miniature solar systems with the ‘planetary electrons’ encircling the ‘nuclear sun’. The real solar system has a factor 1/100 between our orbit and the size of the central Sun; the atom is far emptier, with 1/10,000 as the corresponding ratio between the extent of its central nucleus and the radius of the atom. And this emptiness continues. Individual protons and neutrons are about 10⁻¹⁵ m in diameter […] the relative size of quark to proton is some 1/10,000 (at most!). The same is true for the ‘planetary’ electron relative to the proton ‘sun’: 1/10,000 rather than the ‘mere’ 1/100 of the real solar system. So the world within the atom is incredibly empty.”
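The ratios in the passage are easy to check with a few lines of arithmetic, using the order-of-magnitude sizes quoted (all values in metres):

```python
# Order-of-magnitude sizes from the passage, in metres.
earth_orbit = 1e11   # radius of our orbit around the Sun
sun_radius = 1e9     # radius of the Sun
atom = 1e-10         # typical diameter of an atom
nucleus = 1e-14      # diameter of the nucleus (10^-14 to 10^-15 m)

solar_ratio = sun_radius / earth_orbit   # ~1/100 for the real solar system
atomic_ratio = nucleus / atom            # ~1/10,000 for the atom

print(solar_ratio, atomic_ratio)
```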
“Our inability to see atoms has to do with the fact that light acts like a wave and waves do not scatter easily from small objects. To see a thing, the wavelength of the beam must be smaller than that thing is. Therefore, to see molecules or atoms needs illuminations whose wavelengths are similar to or smaller than them. Light waves, like those our eyes are sensitive to, have wavelength about 10⁻⁷ m […]. This is still a thousand times bigger than the size of an atom. […] To have any chance of seeing molecules and atoms we need light with wavelengths much shorter than these. [And so we move into the world of X-ray crystallography and particle accelerators] […] To probe deep within atoms we need a source of very short wavelength. […] the technique is to use the basic particles […], such as electrons and protons, and speed them in electric fields. The higher their speed, the greater their energy and momentum and the shorter their associated wavelength. So beams of high-energy particles can resolve things as small as atoms.”
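The "higher momentum means shorter wavelength" relation the passage relies on is the de Broglie relation λ = h/p. A minimal non-relativistic sketch (the 100-volt figure is my own illustrative choice, not from the book):

```python
import math

h = 6.626e-34     # Planck's constant, J*s
m_e = 9.109e-31   # electron mass, kg
e = 1.602e-19     # elementary charge, C

def de_broglie_wavelength(volts):
    """Wavelength (m) of an electron accelerated through `volts`,
    non-relativistically: lambda = h / p with p = sqrt(2 m E)."""
    p = math.sqrt(2 * m_e * e * volts)
    return h / p

# Even a modest 100 V gives a wavelength of ~1.2e-10 m -- already
# atom-sized, which is why electron beams can resolve atoms.
print(de_broglie_wavelength(100))
```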
“About 400 billion neutrinos from the Sun pass through each one of us each second.”
“For a century beams of particles have been used to reveal the inner structure of atoms. These have progressed from naturally occurring alpha and beta particles, courtesy of natural radioactivity, through cosmic rays to intense beams of electrons, protons, and other particles at modern accelerators. […] Different particles probe matter in complementary ways. It has been by combining the information from [the] various approaches that our present rich picture has emerged. […] It was the desire to replicate the cosmic rays under controlled conditions that led to modern high-energy physics at accelerators. […] Electrically charged particles are accelerated by electric forces. Apply enough electric force to an electron, say, and it will go faster and faster in a straight line […] Under the influence of a magnetic field, the path of a charged particle will curve. By using electric fields to speed them, and magnetic fields to bend their trajectory, we can steer particles round circles over and over again. This is the basic idea behind huge rings, such as the 27-km-long accelerator at CERN in Geneva. […] our ability to learn about the origins and nature of matter have depended upon advances on two fronts: the construction of ever more powerful accelerators, and the development of sophisticated means of recording the collisions.”
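The steering described in the quote rests on the standard relation r = p/(qB) for the bending radius of a charged particle in a magnetic field. A rough sketch with roughly LHC-like numbers (the 7 TeV and 8.3 T figures are my illustrative assumptions, not from the book):

```python
e = 1.602e-19   # proton charge, C
c = 2.998e8     # speed of light, m/s

def bending_radius(energy_TeV, B_tesla):
    """Bending radius r = p/(qB) for an ultra-relativistic proton;
    at these energies the momentum p is very nearly E/c."""
    p = energy_TeV * 1e12 * e / c   # momentum, kg*m/s
    return p / (e * B_tesla)

# A 7 TeV proton in an 8.3 T dipole field bends with a radius of ~2.8 km --
# consistent with a ring on the 27-km CERN scale (the dipole magnets
# occupy only part of the circumference).
print(bending_radius(7, 8.3))
```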
Weak interaction (‘good article’).
Resonance (particle physics).
Particle accelerator/Cyclotron/Synchrotron/Linear particle accelerator.
Sudbury Neutrino Observatory.
W and Z bosons.
Electroweak interaction (/theory).
Charm (quantum number).
Inverse beta decay.
Below is a list of books I’ve read in 2017.
The letters ‘f’, ‘nf’, and ‘m’ in the parentheses indicate the type of book: ‘f’ refers to ‘fiction’ books, ‘nf’ to ‘non-fiction’ books, and the ‘m’ category covers ‘miscellaneous’ books. The numbers in the parentheses correspond to the goodreads ratings I thought the books deserved.
As usual I’ll try to update the post regularly throughout the year.
1. Brief Candles (3, f). Manning Coles.
5. All Clear (5, f). Connie Willis.
6. The Laws of Thermodynamics: A Very Short Introduction (4, nf. Oxford University Press). Blog coverage here.
13. Kai Lung’s Golden Hours (4, f). Ernest Bramah.
17. All Trivia – A collection of reflections & aphorisms (2, m). Logan Pearsall Smith. Short goodreads review here.
19. Kai Lung Beneath the Mulberry-Tree (4, f). Ernest Bramah.
“Among the hundreds of laws that describe the universe, there lurks a mighty handful. These are the laws of thermodynamics, which summarize the properties of energy and its transformation from one form to another. […] The mighty handful consists of four laws, with the numbering starting inconveniently at zero and ending at three. The first two laws (the ‘zeroth’ and the ‘first’) introduce two familiar but nevertheless enigmatic properties, the temperature and the energy. The third of the four (the ‘second law’) introduces what many take to be an even more elusive property, the entropy […] The second law is one of the all-time great laws of science […]. The fourth of the laws (the ‘third law’) has a more technical role, but rounds out the structure of the subject and both enables and foils its applications.”
“Classical thermodynamics is the part of thermodynamics that emerged during the nineteenth century before everyone was fully convinced about the reality of atoms, and concerns relationships between bulk properties. You can do classical thermodynamics even if you don’t believe in atoms. Towards the end of the nineteenth century, when most scientists accepted that atoms were real and not just an accounting device, there emerged the version of thermodynamics called statistical thermodynamics, which sought to account for the bulk properties of matter in terms of its constituent atoms. The ‘statistical’ part of the name comes from the fact that in the discussion of bulk properties we don’t need to think about the behaviour of individual atoms but we do need to think about the average behaviour of myriad atoms. […] In short, whereas dynamics deals with the behaviour of individual bodies, thermodynamics deals with the average behaviour of vast numbers of them.”
“In everyday language, heat is both a noun and a verb. Heat flows; we heat. In thermodynamics heat is not an entity or even a form of energy: heat is a mode of transfer of energy. It is not a form of energy, or a fluid of some kind, or anything of any kind. Heat is the transfer of energy by virtue of a temperature difference. Heat is the name of a process, not the name of an entity.”
“The supply of 1J of energy as heat to 1 g of water results in an increase in temperature of about 0.2°C. Substances with a high heat capacity (water is an example) require a larger amount of heat to bring about a given rise in temperature than those with a small heat capacity (air is an example). In formal thermodynamics, the conditions under which heating takes place must be specified. For instance, if the heating takes place under conditions of constant pressure with the sample free to expand, then some of the energy supplied as heat goes into expanding the sample and therefore to doing work. Less energy remains in the sample, so its temperature rises less than when it is constrained to have a constant volume, and therefore we report that its heat capacity is higher. The difference between heat capacities of a system at constant volume and at constant pressure is of most practical significance for gases, which undergo large changes in volume as they are heated in vessels that are able to expand.”
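The opening figure follows from the definition of heat capacity, ΔT = Q/(m·c); a one-line check using water's specific heat of about 4.18 J/(g·K):

```python
def temperature_rise(heat_J, mass_g, specific_heat=4.18):
    """Temperature rise (deg C) from supplying heat_J joules to mass_g grams;
    the default specific heat is water's, ~4.18 J/(g*K)."""
    return heat_J / (mass_g * specific_heat)

# 1 J supplied to 1 g of water: a rise of ~0.24 deg C,
# i.e. "about 0.2 deg C" as the passage says.
print(temperature_rise(1, 1))
```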
“Heat capacities vary with temperature. An important experimental observation […] is that the heat capacity of every substance falls to zero when the temperature is reduced towards absolute zero (T = 0). A very small heat capacity implies that even a tiny transfer of heat to a system results in a significant rise in temperature, which is one of the problems associated with achieving very low temperatures when even a small leakage of heat into a sample can have a serious effect on the temperature”.
“A crude restatement of Clausius’s statement is that refrigerators don’t work unless you turn them on.”
“The Gibbs energy is of the greatest importance in chemistry and in the field of bioenergetics, the study of energy utilization in biology. Most processes in chemistry and biology occur at constant temperature and pressure, and so to decide whether they are spontaneous and able to produce non-expansion work we need to consider the Gibbs energy. […] Our bodies live off Gibbs energy. Many of the processes that constitute life are non-spontaneous reactions, which is why we decompose and putrefy when we die and these life-sustaining reactions no longer continue. […] In biology a very important ‘heavy weight’ reaction involves the molecule adenosine triphosphate (ATP). […] When a terminal phosphate group is snipped off by reaction with water […], to form adenosine diphosphate (ADP), there is a substantial decrease in Gibbs energy, arising in part from the increase in entropy when the group is liberated from the chain. Enzymes in the body make use of this change in Gibbs energy […] to bring about the linking of amino acids, and gradually build a protein molecule. It takes the effort of about three ATP molecules to link two amino acids together, so the construction of a typical protein of about 150 amino acid groups needs the energy released by about 450 ATP molecules. […] The ADP molecules, the husks of dead ATP molecules, are too valuable just to discard. They are converted back into ATP molecules by coupling to reactions that release even more Gibbs energy […] and which reattach a phosphate group to each one. These heavy-weight reactions are the reactions of metabolism of the food that we need to ingest regularly.”
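The ATP arithmetic in the passage checks out directly: a chain of n amino acids contains n − 1 peptide bonds, each costing about three ATP:

```python
def atp_for_protein(n_amino_acids, atp_per_bond=3):
    """Rough ATP cost of assembling a protein: a chain of n amino acids
    contains n - 1 peptide bonds, at ~3 ATP per bond (figure from the passage)."""
    return (n_amino_acids - 1) * atp_per_bond

# A typical protein of ~150 amino acids: 447 ATP, i.e. "about 450".
print(atp_for_protein(150))  # 447
```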
Links of interest below – the stuff covered in the links is the sort of stuff covered in this book:
Laws of thermodynamics (article includes links to many other articles of interest, including links to each of the laws mentioned above).
Intensive and extensive properties.
Conservation of energy.
Microscopic view of heat.
Reversible process (thermodynamics).
Coefficient of performance.
Helmholtz free energy.
Gibbs free energy.
I have added some observations from the book below, as well as some links covering people/ideas/stuff discussed/mentioned in the book.
“On average, out of every 100 newly born star systems, 60 are binaries and 40 are triples. Solitary stars like the Sun are later ejected from triple systems formed in this way.”
“…any object will become a black hole if it is sufficiently compressed. For any mass, there is a critical radius, called the Schwarzschild radius, for which this occurs. For the Sun, the Schwarzschild radius is just under 3 km; for the Earth, it is just under 1 cm. In either case, if the entire mass of the object were squeezed within the appropriate Schwarzschild radius it would become a black hole.”
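The Schwarzschild radius quoted has a simple closed form, r_s = 2GM/c²; a quick check of both figures from the quote:

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

def schwarzschild_radius(mass_kg):
    """Critical radius r_s = 2GM/c^2 below which a mass becomes a black hole."""
    return 2 * G * mass_kg / c**2

# Sun, ~2.0e30 kg: just under 3 km.  Earth, ~6.0e24 kg: just under 1 cm.
print(schwarzschild_radius(2.0e30))  # ~2970 m
print(schwarzschild_radius(6.0e24))  # ~0.0089 m
```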
“It only became possible to study the centre of our Galaxy when radio telescopes and other instruments that do not rely on visible light became available. There is a great deal of dust in the plane of the Milky Way […] This blocks out visible light. But longer wavelengths penetrate the dust more easily. That is why sunsets are red – short wavelength (blue) light is scattered out of the line of sight by dust in the atmosphere, while the longer wavelength red light gets through to your eyes. So our understanding of the galactic centre is largely based on infrared and radio observations.”
“there is strong evidence that the Milky Way Galaxy is a completely ordinary disc galaxy, a typical representative of its class. Since that is the case, it means that we can confidently use our inside knowledge of the structure and evolution of our own Galaxy, based on close-up observations, to help our understanding of the origin and nature of disc galaxies in general. We do not occupy a special place in the Universe; but this was only finally established at the end of the 20th century. […] in the decades following Hubble’s first measurements of the cosmological distance scale, the Milky Way still seemed like a special place. Hubble’s calculation of the distance scale implied that other galaxies are relatively close to our Galaxy, and so they would not have to be very big to appear as large as they do on the sky; the Milky Way seemed to be by far the largest galaxy in the Universe. We now know that Hubble was wrong. […] the value he initially found for the Hubble Constant was about seven times bigger than the value accepted today. In other words, all the extragalactic distances Hubble inferred were seven times too small. But this was not realized overnight. The cosmological distance scale was only revised slowly, over many decades, as observations improved and one error after another was corrected. […] The importance of determining the cosmological distance scale accurately, more than half a century after Hubble’s pioneering work, was still so great that it was a primary justification for the existence of the Hubble Space Telescope (HST).”
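The "seven times" factor follows directly from Hubble's law, d = v/H₀: inferred distances scale inversely with the assumed Hubble constant. A sketch using round numbers (Hubble's early value was roughly 500 km/s/Mpc against a modern ~70; both figures here are approximate):

```python
def hubble_distance_Mpc(velocity_km_s, H0_km_s_Mpc):
    """Distance from Hubble's law, d = v / H0."""
    return velocity_km_s / H0_km_s_Mpc

H0_early = 500    # roughly Hubble's original estimate, km/s/Mpc
H0_modern = 70    # roughly the modern value, km/s/Mpc

# For any given recession velocity, the early distance estimate comes out
# ~7 times smaller than the modern one.
v = 7000  # an arbitrary example velocity, km/s
ratio = hubble_distance_Mpc(v, H0_modern) / hubble_distance_Mpc(v, H0_early)
print(ratio)  # ~7.1
```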
“The key point to grasp […] is that the expansion described by [Einstein’s] equations is an expansion of space as time passes. The cosmological redshift is not a Doppler effect caused by galaxies moving outward through space, as if fleeing from the site of some great explosion, but occurs because the space between the galaxies is stretching. So the spaces between galaxies increase while light is on its way from one galaxy to another. This stretches the light waves to longer wavelengths, which means shifting them towards the red end of the spectrum. […] The second key point about the universal expansion is that it does not have a centre. There is nothing special about the fact that we observe galaxies receding with redshifts proportional to their distances from the Milky Way. […] whichever galaxy you happen to be sitting in, you will see the same thing – redshift proportional to distance.”
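The stretching picture gives the redshift a simple form: 1 + z equals the ratio of the scale of space at observation to its scale at emission. A minimal sketch:

```python
def redshift_from_stretching(scale_at_emission, scale_at_observation):
    """Cosmological redshift: wavelengths stretch with space itself,
    so 1 + z = a(observation) / a(emission)."""
    return scale_at_observation / scale_at_emission - 1

# If space doubled in scale while the light was in transit,
# every wavelength doubled too, and z = 1.
print(redshift_from_stretching(1.0, 2.0))  # 1.0
```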
“The age of the Universe is determined by studying some of the largest things in the Universe, clusters of galaxies, and analysing their behaviour using the general theory of relativity. Our understanding of how stars work, from which we calculate their ages, comes from studying some of the smallest things in the Universe, the nuclei of atoms, and using the other great theory of 20th-century physics, quantum mechanics, to calculate how nuclei fuse with one another to release the energy that keeps stars shining. The fact that the two ages agree with one another, and that the ages of the oldest stars are just a little bit less than the age of the Universe, is one of the most compelling reasons to think that the whole of 20th-century physics works and provides a good description of the world around us, from the very small scale to the very large scale.”
“Planets are small objects orbiting a large central mass, and the gravity of the Sun dominates their motion. Because of this, the speed with which a planet moves […] is inversely proportional to the square root of its distance from the centre of the Solar System. Jupiter is farther from the Sun than we are, so it moves more slowly in its orbit than the Earth, as well as having a larger orbit. But all the stars in the disc of a galaxy move at the same speed. Stars farther out from the centre still have bigger orbits, so they still take longer to complete one circuit of the galaxy. But they are all travelling at essentially the same orbital speed through space.”
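The Solar System half of the comparison is Kepler's relation for circular orbits, v = √(GM/r), so speed falls off as the inverse square root of distance; a quick check with Earth and Jupiter:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30   # mass of the Sun, kg

def orbital_speed(r_m):
    """Circular orbital speed around the Sun, v = sqrt(GM/r):
    inversely proportional to the square root of the distance."""
    return math.sqrt(G * M_sun / r_m)

v_earth = orbital_speed(1.496e11)    # ~29.8 km/s
v_jupiter = orbital_speed(7.785e11)  # ~13.1 km/s
print(v_earth, v_jupiter)
# Stars in a galactic disc break this pattern: they orbit at roughly
# the same speed regardless of radius (the "flat rotation curve").
```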
“The importance of studying objects at great distances across the Universe is that when we look at an object that is, say, 10 billion light years away, we see it by light which left it 10 billion years ago. This is the ‘look back time’, and it means that telescopes are in a sense time machines, showing us what the Universe was like when it was younger. The light from a distant galaxy is old, in the sense that it has been a long time on its journey; but the galaxy we see using that light is a young galaxy. […] For distant objects, because light has taken a long time on its journey to us, the Universe has expanded significantly while the light was on its way. […] This raises problems defining exactly what you mean by the ‘present distance’ to a remote galaxy”
“Among the many advantages that photographic and electronic recording methods have over the human eye, the most fundamental is that the longer they look, the more they see. Human eyes essentially give us a real-time view of our surroundings, and allow us to see things – such as stars – that are brighter than a certain limit. If an object is too faint to see, once your eyes have adapted to the dark no amount of staring in its direction will make it visible. But the detectors attached to modern telescopes keep on adding up the light from faint sources as long as they are pointing at them. A longer exposure will reveal fainter objects than a short exposure does, as the photons (particles of light) from the source fall on the detector one by one and the total gradually grows.”
“Nobody can be quite sure where the supermassive black holes at the hearts of galaxies today came from, but it seems at least possible that […] merging of black holes left over from the first generation of stars [in the universe] began the process by which supermassive black holes, feeding off the matter surrounding them, formed. […] It seems very unlikely that supermassive black holes formed first and then galaxies grew around them; they must have formed together, in a process sometimes referred to as co-evolution, from the seeds provided by the original black holes of a few hundred solar masses and the raw materials of the dense clouds of baryons in the knots in the filamentary structure. […] About one in a hundred of the galaxies seen at low redshifts are actively involved in the late stages of mergers, but these processes take so little time, compared with the age of the Universe, that the statistics imply that about half of all the galaxies visible nearby are the result of mergers between similarly sized galaxies in the past seven or eight billion years. Disc galaxies like the Milky Way seem themselves to have been built up from smaller sub-units, starting out with the spheroid and adding bits and pieces as time passed. […] there were many more small galaxies when the Universe was young than we see around us today. This is exactly what we would expect if many of the small galaxies have either grown larger through mergers or been swallowed up by larger galaxies.”
Links of interest:
Galaxy (‘featured article’).
The Great Debate.
Henrietta Swan Leavitt (‘good article’).
Ejnar Hertzsprung. (Before reading this book, I had no idea one of the people behind the famous Hertzsprung–Russell diagram was a Dane. I blame my physics teachers. I was probably told this by one of them, but if the guy in question had been a better teacher, I’d have listened, and I’d have known this.).
Globular cluster (‘featured article’).
Redshift (‘featured article’).
Refracting telescope/Reflecting telescope.
General relativity (featured).
The Big Bang theory (featured).
Age of the universe.
Type Ia supernova.
Cosmic microwave background.
Cold dark matter.
Active galactic nucleus.
Hubble Ultra-Deep Field.
Ultimate fate of the universe.
Some quotes from the book below.
“Tests that are used in clinical neuropsychology in most cases examine one or more aspects of cognitive domains, which are theoretical constructs in which a multitude of cognitive processes are involved. […] By definition, a subdivision in cognitive domains is arbitrary, and many different classifications exist. […] for a test to be recommended, several criteria must be met. First, a test must have adequate reliability: the test must yield similar outcomes when applied over multiple test sessions, i.e., have good test–retest reliability. […] Furthermore, the interobserver reliability is important, in that the test must have a standardized assessment procedure and is scored in the same manner by different examiners. Second, the test must have adequate validity. Here, different forms of validity are important. Content validity is established by expert raters with respect to item formulation, item selection, etc. Construct validity refers to the underlying theoretical construct that the test is assumed to measure. To assess construct validity, both convergent and divergent validities are important. Convergent validity refers to the amount of agreement between a given test and other tests that measure the same function. In turn, a test with a good divergent validity correlates minimally with tests that measure other cognitive functions. Moreover, predictive validity (or criterion validity) is related to the degree of correlation between the test score and an external criterion, for example, the correlation between a cognitive test and functional status. […] it should be stressed that cognitive tests alone cannot be used as ultimate proof for organic brain damage, but should be used in combination with more direct measures of cerebral abnormalities, such as neuroimaging.”
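Test–retest reliability is typically quantified as the correlation between scores obtained in two testing sessions. A minimal sketch using Pearson's r (the scores below are invented purely for illustration):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two lists of scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical scores of six examinees on the same test, two sessions apart;
# a correlation close to 1 indicates good test-retest reliability.
session_1 = [12, 15, 9, 20, 14, 17]
session_2 = [13, 14, 10, 19, 15, 16]
print(pearson_r(session_1, session_2))  # ~0.98
```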
“Intelligence is a theoretically ill-defined construct. In general, it refers to the ability to think in an abstract manner and solve new problems. Typically, two forms of intelligence are distinguished, crystallized intelligence (academic skills and knowledge that one has acquired during schooling) and fluid intelligence (the ability to solve new problems). Crystallized intelligence is better preserved in patients with brain disease than fluid intelligence (3). […] From a neuropsychological viewpoint, the concept of intelligence as a unitary construct (often referred to as g-factor) does not provide valuable information, since deficits in specific cognitive functions may be averaged out in the total IQ score. Thus, in most neuropsychological studies, intelligence tests are included because of specific subtests that are assumed to measure specific cognitive functions, and the performance profile is analyzed rather than considering the IQ measure as a compound score in isolation.”
“Attention is a concept that in general relates to the selection of relevant information from our environment and the suppression of irrelevant information (selective or “focused” attention), the ability to shift attention between tasks (divided attention), and to maintain a state of alertness to incoming stimuli over longer periods of time (concentration and vigilance). Many different structures in the human brain are involved in attentional processing and, consequently, disorders in attention occur frequently after brain disease or damage (21). […] Speed of information processing is not a localized cognitive function, but depends greatly on the integrity of the cerebral network as a whole, the subcortical white matter and the interhemispheric and intrahemispheric connections. It is one of the cognitive functions that clearly declines with age and it is highly susceptible to brain disease or dysfunction of any kind.”
“The MiniMental State Examination (MMSE) is a screening instrument that has been developed to determine whether older adults have cognitive impairments […] numerous studies have shown that the MMSE has poor sensitivity and specificity, as well as a low-test–retest reliability […] the MMSE has been developed to determine cognitive decline that is typical for Alzheimer’s dementia, but has been found less useful in determining cognitive decline in nondemented patients (44) or in patients with other forms of dementia. This is important since odds ratios for both vascular dementia and Alzheimer’s dementia are increased in diabetes (45). Notwithstanding this increased risk, most patients with diabetes have subtle cognitive deficits (46, 47) that may easily go undetected using gross screening instruments such as the MMSE. For research in diabetes a high sensitivity is thus especially important. […] ceiling effects in test performance often result in a lack of sensitivity. Subtle impairments are easily missed, resulting in a high proportion of false-negative cases […] In general, tests should be cognitively demanding to avoid ceiling effects in patients with mild cognitive dysfunction.[…] sensitive domains such as speed of information processing, (working) memory, attention, and executive function should be examined thoroughly in diabetes patients, whereas other domains such as language, motor function, and perception are less likely to be affected. Intelligence should always be taken into account, and confounding factors such as mood, emotional distress, and coping are crucial for the interpretation of the neuropsychological test results.”
“The life-time risk of any dementia has been estimated to be more than 1 in 5 for women and 1 in 6 for men (2). Worldwide, about 24 million people have dementia, with 4.6 million new cases of dementia every year (3). […] Dementia can be caused by various underlying diseases, the most common of which is Alzheimer’s disease (AD) accounting for roughly 70% of cases in the elderly. The second most common cause of dementia is vascular dementia (VaD), accounting for 16% of cases. Other, less common, causes include dementia with Lewy bodies (DLB) and frontotemporal lobar degeneration (FTLD). […] It is estimated that both the incidence and the prevalence [of AD] double with every 5-year increase in age. Other risk factors for AD include female sex and vascular risk factors, such as diabetes, hypercholesterolaemia and hypertension […] In contrast with AD, progression of cognitive deficits [in VaD] is mostly stepwise and with an acute or subacute onset. […] it is clear that cerebrovascular disease is one of the major causes of cognitive decline. Vascular risk factors such as diabetes mellitus and hypertension have been recognized as risk factors for VaD […] Although pure vascular dementia is rare, cerebrovascular pathology is frequently observed on MRI and in pathological studies of patients clinically diagnosed with AD […] Evidence exists that AD and cerebrovascular pathology act synergistically (60).”
“In type 1 diabetes the annual prevalence of severe hypoglycemia (requiring help for recovery) is 30–40% while the annual incidence varies depending on the duration of diabetes. In insulin-treated type 2 diabetes, the frequency is lower but increases with duration of insulin therapy. […] In normal health, blood glucose is maintained within a very narrow range […] The functioning of the brain is optimal within this range; cognitive function rapidly becomes impaired when the blood glucose falls below 3.0 mmol/l (54 mg/dl) (3). Similarly, but much less dramatically, cognitive function deteriorates when the brain is exposed to high glucose concentrations” (I did not know the latter for certain, but I certainly have had my suspicions for a long time).
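The unit conversion in the quote is worth making explicit: glucose has a molar mass of about 180 g/mol, so 1 mmol/l corresponds to roughly 18 mg/dl. A quick check of the 3.0 mmol/l threshold:

```python
GLUCOSE_MOLAR_MASS = 180.16   # g/mol

def mmol_l_to_mg_dl(mmol_l):
    """Convert blood glucose from mmol/l to mg/dl: 1 mmol/l of glucose is
    180.16 mg per litre, i.e. ~18 mg per decilitre."""
    return mmol_l * GLUCOSE_MOLAR_MASS / 10

# The passage's threshold: 3.0 mmol/l is ~54 mg/dl.
print(round(mmol_l_to_mg_dl(3.0)))  # 54
```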
“When exogenous insulin is injected into a non-diabetic adult human, peripheral tissues such as skeletal muscle and adipose tissue rapidly take up glucose, while hepatic glucose output is suppressed. This causes blood glucose to fall and triggers a series of counterregulatory events to counteract the actions of insulin; this prevents a progressive decline in blood glucose and subsequently reverses the hypoglycemia. In people with insulin-treated diabetes, many of the homeostatic mechanisms that regulate blood glucose are either absent or deficient. [If you’re looking for more details on these topics, it should perhaps be noted here that Philip Cryer’s book on these topics is very nice and informative]. […] The initial endocrine response to a fall in blood glucose in non-diabetic humans is the suppression of endogenous insulin secretion. This is followed by the secretion of the principal counterregulatory hormones, glucagon and epinephrine (adrenaline) (5). Cortisol and growth hormone also contribute, but have greater importance in promoting recovery during exposure to prolonged hypoglycemia […] Activation of the peripheral sympathetic nervous system and the adrenal glands provokes the release of a copious quantity of catecholamines, epinephrine, and norepinephrine […] Glucagon is secreted from the alpha cells of the pancreatic islets, apparently in response to localized neuroglycopenia and independent of central neural control. […] The large amounts of catecholamines that are secreted in response to hypoglycemia exert other powerful physiological effects that are unrelated to counterregulation. These include major hemodynamic actions with direct effects on the heart and blood pressure. […] regional blood flow changes occur during hypoglycemia that encourages the transport of substrates to the liver for gluconeogenesis and simultaneously of glucose to the brain. 
Organs that have no role in the response to acute stress, such as the spleen and kidneys, are temporarily under-perfused. The mobilisation and activation of white blood cells are accompanied by hemorheological effects, promoting increased viscosity, coagulation, and fibrinolysis and may influence endothelial function (6). In normal health these acute physiological changes probably exert no harmful effects, but may acquire pathological significance in people with diabetes of long duration.”
“The more complex and attention-demanding cognitive tasks, and those that require speeded responses are more affected by hypoglycemia than simple tasks or those that do not require any time restraint (3). The overall speed of response of the brain in making decisions is slowed, yet for many tasks, accuracy is preserved at the expense of speed (8, 9). Many aspects of mental performance become impaired when blood glucose falls below 3.0 mmol/l […] Recovery of cognitive function does not occur immediately after the blood glucose returns to normal, but in some cognitive domains may be delayed for 60 min or more (3), which is of practical importance to the performance of tasks that require complex cognitive functions, such as driving. […] [the] major changes that occur during hypoglycemia – counterregulatory hormone secretion, symptom generation, and cognitive dysfunction – occur as components of a hierarchy of responses, each being triggered as the blood glucose falls to its glycemic threshold. […] In nondiabetic individuals, the glycemic thresholds are fixed and reproducible (10), but in people with diabetes, these thresholds are dynamic and plastic, and can be modified by external factors such as glycemic control or exposure to preceding (antecedent) hypoglycemia (11). Changes in the glycemic thresholds for the responses to hypoglycemia underlie the effects of the acquired hypoglycemia syndromes that can develop in people with insulin-treated diabetes […] the incidence of severe hypoglycemia in people with insulin-treated type 2 diabetes increases steadily with duration of insulin therapy […], as pancreatic beta-cell failure develops. 
The under-recognized risk of severe hypoglycemia in insulin-treated type 2 diabetes is of great practical importance as this group is numerically much larger than people with type 1 diabetes and encompasses many older, and some very elderly, people who may be exposed to much greater danger because they often have co-morbidities such as macrovascular disease, osteoporosis, and general frailty.”
“Hypoglycemia occurs when a mismatch develops between the plasma concentrations of glucose and insulin, particularly when the latter is inappropriately high, which is common during the night. Hypoglycemia can result when too much insulin is injected relative to oral intake of carbohydrate or when a meal is missed or delayed after insulin has been administered. Strenuous exercise can precipitate hypoglycemia through accelerated absorption of insulin and depletion of muscle glycogen stores. Alcohol enhances the risk of prolonged hypoglycemia by inhibiting hepatic gluconeogenesis, but the hypoglycemia may be delayed for several hours. Errors of dosage or timing of insulin administration are common, and there are few conditions where the efficacy of the treatment can be influenced by so many extraneous factors. The time–action profiles of different insulins can be modified by factors such as the ambient temperature or the site and depth of injection and the person with diabetes has to constantly try to balance insulin requirement with diet and exercise. It is therefore not surprising that hypoglycemia occurs so frequently. […] The lower the median blood glucose during the day, the greater the frequency of symptomatic and biochemical hypoglycemia […] Strict glycemic control can […] induce the acquired hypoglycemia syndromes, impaired awareness of hypoglycemia (a major risk factor for severe hypoglycemia), and counterregulatory hormonal deficiencies (which interfere with blood glucose recovery). […] Severe hypoglycemia is more common at the extremes of age – in very young children and in elderly people. […] In type 1 diabetes the frequency of severe hypoglycemia increases with duration of diabetes (12), while in type 2 diabetes it is associated with increasing duration of insulin treatment (18). […] Around one quarter of all episodes of severe hypoglycemia result in coma […] In 10% of episodes of severe hypoglycemia affecting people with type 1 diabetes and around 30% of those in people with insulin-treated type 2 diabetes, the assistance of the emergency medical services is required (23). However, most episodes (both mild and severe) are treated in the community, and few people require admission to hospital.”
“Severe hypoglycemia is potentially dangerous and has a significant mortality and morbidity, particularly in older people with insulin-treated diabetes who often have premature macrovascular disease. The hemodynamic effects of autonomic stimulation may provoke acute vascular events such as myocardial ischemia and infarction, cardiac failure, cerebral ischemia, and stroke (6). In clinical practice the cardiovascular and cerebrovascular consequences of hypoglycemia are frequently overlooked because the role of hypoglycemia in precipitating the vascular event is missed. […] The profuse secretion of catecholamines in response to hypoglycemia provokes a fall in plasma potassium and causes electrocardiographic (ECG) changes, which in some individuals may provoke a cardiac arrhythmia […]. A possible mechanism that has been observed with ECG recordings during hypoglycemia is prolongation of the QT interval […]. Hypoglycemia-induced arrhythmias during sleep have been implicated as the cause of the “dead in bed” syndrome that is recognized in young people with type 1 diabetes (40). […] Total cerebral blood flow is increased during acute hypoglycemia while regional blood flow within the brain is altered acutely. Blood flow increases in the frontal cortex, presumably as a protective compensatory mechanism to enhance the supply of available glucose to the most vulnerable part of the brain. These regional vascular changes become permanent in people who are exposed to recurrent severe hypoglycemia and in those with impaired awareness of hypoglycemia, and are then present during normoglycemia (41). This probably represents an adaptive response of the brain to recurrent exposure to neuroglycopenia. However, these permanent hypoglycemia-induced changes in regional cerebral blood flow may encourage localized neuronal ischemia, particularly if the cerebral circulation is already compromised by the development of cerebrovascular disease associated with diabetes. 
[…] Hypoglycemia-induced EEG changes can persist for days or become permanent, particularly after recurrent severe hypoglycemia”.
“In the large British Diabetic Association Cohort Study of people who had developed type 1 diabetes before the age of 30, acute metabolic complications of diabetes were the greatest single cause of excess death under the age of 30; hypoglycemia was the cause of death in 18% of males and 6% of females in the 20–49 age group (47).”
“[The] syndromes of counterregulatory hormonal deficiencies and impaired awareness of hypoglycemia (IAH) develop over a period of years and ultimately affect a substantial proportion of people with type 1 diabetes and a lesser number with insulin-treated type 2 diabetes. They are considered to be components of hypoglycemia-associated autonomic failure (HAAF), through down-regulation of the central mechanisms within the brain that would normally activate glucoregulatory responses to hypoglycemia, including the release of counterregulatory hormones and the generation of warning symptoms (48). […] The glucagon secretory response to hypoglycemia becomes diminished or absent within a few years of the onset of insulin-deficient diabetes. With glucagon deficiency alone, blood glucose recovery from hypoglycemia is not noticeably affected because the secretion of epinephrine maintains counterregulation. However, almost half of those who have type 1 diabetes of 20 years duration have evidence of impairment of both glucagon and epinephrine in response to hypoglycemia (49); this seriously delays blood glucose recovery and allows progression to more severe and prolonged hypoglycemia when exposed to low blood glucose. People with type 1 diabetes who have these combined counterregulatory hormonal deficiencies have a 25-fold higher risk of experiencing severe hypoglycemia if they are subjected to intensive insulin therapy compared with those who have lost their glucagon response but have retained epinephrine secretion […] Impaired awareness is not an “all or none” phenomenon. “Partial” impairment of awareness may develop, with the individual being aware of some episodes of hypoglycemia but not others (53). Alternatively, the intensity or number of symptoms may be reduced, and neuroglycopenic symptoms predominate. 
[…] total absence of any symptoms, albeit subtle, is very uncommon […] IAH affects 20–25% of patients with type 1 diabetes (11, 55) and less than 10% with type 2 diabetes (24), becomes more prevalent with increasing duration of diabetes (12) […], and predisposes the patient to a sixfold higher risk of severe hypoglycemia than people who retain normal awareness (56). When IAH is associated with strict glycemic control during intensive insulin therapy or has followed episodes of recurrent severe hypoglycemia, it may be reversible by relaxing glycemic control or by avoiding further hypoglycemia (11), but in many patients with type 1 diabetes of long duration, it appears to be a permanent defect. […] The modern management of diabetes strives to achieve strict glycemic control using intensive therapy to avoid or minimize the long-term complications of diabetes; this strategy tends to increase the risk of hypoglycemia and promotes development of the acquired hypoglycemia syndromes.”
“Up to 15% of patients with unipolar depression eventually commit suicide” (Suicide, depression, and antidepressants)
“A study of 42 adults with AS diagnoses living in the community found that 40 % of subjects had considered committing suicide at some time in the past, and 15 % of respondents reported that they had made at least one attempt to kill themselves (Balfe & Tantam, 2010)”
“The most relevant study was conducted in 2013. It compared 791 children with autism to non-autistic depressed children and typical children. The findings favored a 28-fold increase in suicide behavior in the autism sample compared to the typical children; 10.9 % of children with autism had suicidal ideation and 7.2 % had made attempts”
“…a large proportion of persons with ASD, over 50% in some studies, […] suffer from depression, and we have reports that suicidal ideation is one of the common depressive symptoms leading to this diagnosis.”
“Clinical samples suggest that suicide occurs more frequently in high functioning autism” (Suicide in Autism Spectrum Disorder)
“Compared to those in the general population, individuals with type 1 diabetes (in a British study) had 11 times the suicide rate […] One retrospective outcome study of 160 cases of insulin overdose reported to a regional poison control unit found that nearly 90% were either suicidal or parasuicidal, whereas only 5% of cases were deemed accidental.” (Insulin Overdose Among Patients With Diabetes: A Readily Available Means of Suicide)
“40% of the [diabetic] patients reported that they had felt tired of living and thought that life was not worth living during the last 12 months, and 23% patients admitted to having thought of ending their own life.” (Quality of Life and Suicide Risk in Patients With Diabetes Mellitus)
“The research reviewed indicated that patients with DM-1 are at an increased risk for suicide, although no clear consensus exists regarding the level of the increased risk. […] Our findings support the recommendation that a suicide risk assessment of patients with DM-1 should be part of the routine clinical assessment. The assessment of patients at risk should consist of the evaluation of current and previous suicidal behaviors (both suicidal ideation and attempted suicide).” (Suicide risk in type 1 diabetes mellitus: A systematic review)
“In terms of rates of 1-year prevalence we found that 3.1 % (confidence interval [CI] at 95% = 2.5-3.7) of the sample reported serious thoughts of suicide […] We found significant associations between suicide ideation on the one hand and living alone (OR = 2.5), having no friends (OR = 23) and feeling alone very often (OR= 10.5) on the other hand. […] Twenty-one percent of the individuals who […] felt lonely very often, reported having thought seriously about suicide, in contrast with 2.5% of those who did not.” (from Loneliness in Relation to Suicide Ideation and Parasuicide: A Population-Wide Study)
I’m a depressed, friendless*, lonely (supposedly ‘high-functioning’) autistic type 1 diabetic. Relatedly, I live in a place people describe this way: “I’ve never been in a country with such a sharp dichotomy between stranger and friend, or one that was so coldly unfriendly to strangers.”
* (…to the long-term readers with a good memory: in the end she got tired of my unsatisfactory behaviour and told me, kindly, to get lost and stop bothering her. I had no difficulty understanding her decision, which I have accordingly respected. Incidentally, I should probably note that my conceptual model of friendship has changed since I wrote that post, at least partly as a result of increased knowledge about these topics).
Below I have posted a list of the 156 books I read to completion in 2016, as well as links to blog posts covering the books and to reviews of them which I’ve written on goodreads. At the bottom of the post I have also added the 7 books I did not finish this year, along with some related links and comments. The post you’re reading now is unlikely to be the final edition, as I’ll continue to add links and comments in 2017 if/when I blog about or review books mentioned below.
As I also mentioned earlier in the year, I have been reading a lot of fiction and not enough non-fiction this year. Regarding the ‘technical aspects’ of the list below: as usual the letters ‘f’ and ‘nf.’ in the parentheses correspond to ‘fiction’ and ‘non-fiction’, respectively, whereas the ‘m’ category covers ‘miscellaneous’ books. The numbers in the parentheses correspond to the goodreads ratings I thought the books deserved.
I did a brief count of the books on the list and concluded that it includes 30 books categorized as non-fiction, 20 books in the miscellaneous category, and 106 books categorized as fiction. As usual, non-fiction works published by Springer make up a substantial proportion of the non-fiction books I read (20 %), with another 20 % accounted for by Oxford University Press, Princeton University Press, and Wiley/Wiley-Blackwell. Some of the authors in the fiction category have also featured on these lists previously (Christie, Wodehouse, Bryson), but other names are new: Dick Francis (39 books), Tom Sharpe (16 books), David Sedaris (7 books), Mario Puzo (4 books), Gerald Durrell (3 books), and Connie Willis (3 books).
I shared my ‘year in books’ on goodreads, and that link includes a few summary stats as well as cover images of the books (annoyingly, a large-ish proportion of the non-fiction books do not have cover pictures added, but it’s still a neat visualization tool). With 156 books finished this year I read almost exactly 3 books per week on average, and the goodreads tools also tell me that I read 47,281 pages during the year. As I don’t believe goodreads includes the page counts of partially read books in that tool, this is probably a slight underestimate, but it’s in that neighbourhood anyway; it corresponds to ~130 pages per day on average (129.5) throughout the year, or roughly 900 pages per week. The average length of the books I finished was 309 pages, again according to goodreads.
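For the numerically inclined, the per-day and per-week averages above follow directly from the two totals. A quick, purely illustrative sketch of the arithmetic (the variable names are my own, and I assume a 365-day year, which is what the 129.5 figure implies even though 2016 was a leap year):

```python
# Back-of-the-envelope check of the reading statistics quoted above.
# Inputs: 156 books finished and 47,281 pages read in 2016 (per goodreads).
BOOKS = 156
PAGES = 47_281
DAYS = 365  # the 129.5 pages/day figure implies a 365-day year was used

pages_per_day = PAGES / DAYS        # ~129.5 pages per day
books_per_week = BOOKS / 52         # ~3.0 books per week
pages_per_week = pages_per_day * 7  # ~907 pages, i.e. "roughly 900" per week
avg_pages_per_book = PAGES / BOOKS  # ~303 pages; goodreads itself reports 309,
                                    # presumably from a different page-count basis

print(f"{pages_per_day:.1f} pages/day, {books_per_week:.1f} books/week, "
      f"{pages_per_week:.0f} pages/week, {avg_pages_per_book:.0f} pages/book")
```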
Since I started blogging, I have published roughly 500 posts about books I’ve read – I actually realized while writing this post that the next post I publish on this site categorized under ‘books’ will be post number 500 in that category. As should be obvious from the list below, as a rule I do not cover fiction books on this blog, except in quote posts where I may occasionally include a few quotes from books I’ve read (I decided early on not to include links to such posts on lists like these, as that would be too much work). On the topic of quotes, I should add for readers not already aware of it that I recently decided to move/copy a large number of quotes from this site to goodreads, and that I now update my goodreads quote collection more frequently than the quote collection on this blog; at this point, my goodreads quote collection includes 1347 quotes. Incidentally, for a few more details about this aspect of the goodreads site, see this post.
Both Dick Francis and Connie Willis were introduced to me by the SSC commentariat, and this link includes a lot of other author recommendations which might be of interest to you. Before moving on to the list, I should perhaps also note that I have recently added a not-insignificant number of books to my list of favourite books on goodreads. I have (retrospectively) slightly modified my implicit selection criteria for adding books to that list; previously, if a book had taught me a lot but I did not give it a five-star rating, or I figured it wasn’t at least very close to perfect, it wasn’t going to get anywhere near my list of favourite books. I recently figured that I should also include books which had taught me a lot or changed my way of looking at the world, even if they were not very close to perfect in most respects. I’m still not quite sure what the best categorization approach is, but as of now the list includes some books which did not feature on it until recently. I mention the list explicitly here because people perusing a list like the one below are presumably in part looking for good books to read, and my inclusion of a book on that list can still be taken as at least a qualified recommendation.
1. 4.50 from Paddington (4, f). Agatha Christie.
3. Hickory Dickory Dock (3, f). Agatha Christie.
6. A Caribbean Mystery (3, f). Agatha Christie.
7. A Rulebook for Arguments (Hackett Student Handbooks) (1, nf. Hackett Publishing). Very short goodreads review here.
8. The Clocks (2, f). Agatha Christie.
15. By the Pricking of My Thumbs (2, f). Agatha Christie.
16. The Godfather (4, f). Mario Puzo.
21. A Few Quick Ones (3, f). P. G. Wodehouse.
22. Ice in the Bedroom (4, f). P. G. Wodehouse.
24. The Secret of Chimneys (2, f). Agatha Christie.
26. Something Fishy (3, f). P. G. Wodehouse.
27. Do Butlers Burgle Banks? (3, f). P. G. Wodehouse.
28. The Mirror Crack’d from Side to Side (1, f). Agatha Christie. Boring story, almost didn’t finish it.
29. Frozen Assets (4, f). P. G. Wodehouse.
30. A Cooperative Species: Human Reciprocity and Its Evolution (5, nf. Princeton University Press). Goodreads review here. Blog coverage here.
31. If I Were You (4, f). P. G. Wodehouse.
32. On the Shortness of Life (nf.). Seneca the Younger.
33. Barmy in Wonderland (3, f). P. G. Wodehouse.
38. Company for Henry (4, f). P. G. Wodehouse.
39. Bachelors Anonymous (5, f). P. G. Wodehouse. A short book, but very funny.
40. The Second World War (5, nf.). Winston Churchill. Very long; the book is a thousand-page abridgement of 6 different volumes written by Churchill. Blog coverage here, here, here, and here. I added this book to my list of favourite books on goodreads.
41. The Old Reliable (3, f). P. G. Wodehouse.
42. Performing Flea (4, m). P. G. Wodehouse, William Townend.
45. The Road to Little Dribbling: Adventures of an American in Britain (3, m). Bill Bryson.
46. Bryson’s Dictionary of Troublesome Words: A Writer’s Guide to Getting It Right (3, nf.). Bill Bryson. Goodreads review here.
48. Shakespeare: The World as Stage (2, m). Bill Bryson.
50. The Sicilian (3, f). Mario Puzo.
53. Pre-Industrial Societies: Anatomy of the Pre-Modern World (5, nf. Oneworld Publications). Goodreads review here. I added this book to my list of favourite books on goodreads.
56. Aunts Aren’t Gentlemen (3, f). P. G. Wodehouse.
57. What If?: Serious Scientific Answers to Absurd Hypothetical Questions (2, m). Randall Munroe. Short goodreads review here.
61. Wilt In Nowhere (3, f). Tom Sharpe.
63. Monstrous Regiment (3, f). Terry Pratchett.
67. Vintage Stuff (2, f). Tom Sharpe.
70. Suicide Prevention and New Technologies: Evidence Based Practice (1, nf. Palgrave Macmillan). Long(-ish) goodreads review here.
72. Diabetes and the Metabolic Syndrome in Mental Health (2, nf. Lippincott Williams & Wilkins). Goodreads review here. Blog coverage here and here.
77. The Great Pursuit (3, f). Tom Sharpe.
78. Riotous Assembly (4, f). Tom Sharpe.
79. Indecent Exposure (3, f). Tom Sharpe.
87. Naked (3, m). David Sedaris.
91. When You Are Engulfed in Flames (3, m). David Sedaris.
93. Poor Richard’s Almanack (m). Benjamin Franklin.
97. The Garden of the Gods (3, m). Gerald Durrell.
100. The Thirteen Problems (2, f). Agatha Christie.
101. Dead Cert (4, f). Dick Francis.
102. Nerve (3, f). Dick Francis.
103. For Kicks (3, f). Dick Francis.
104. Odds Against (3, f). Dick Francis.
108. Blood Sport (3, f). Dick Francis.
110. Forfeit (2, f). Dick Francis.
117. High Stakes (4, f). Dick Francis.
118. In the Frame (3, f). Dick Francis.
119. Knockdown (3, f). Dick Francis.
121. Managing Diabetic Nephropathies in Clinical Practice (4, nf. Springer). Very short goodreads review here. Blog coverage here.
129. Proof (2, f). Dick Francis.
130. Break In (3, f). Dick Francis.
131. Integrated Diabetes Care: A Multidisciplinary Approach (4, nf. Springer). Goodreads review here. Blog coverage here and here.
136. Longshot (4, f). Dick Francis.
141. Decider (3, f). Dick Francis.
142. Essential Microbiology and Hygiene for Food Professionals (2, nf. CRC Press). Short goodreads review here.
143. Wild Horses (2, f). Dick Francis.
144. Come to Grief (4, f). Dick Francis.
145. To the Hilt (2, f). Dick Francis.
147. Second Wind (2, f). Dick Francis.
149. Under Orders (4, f). Dick Francis.
Books I did not finish:
Raising Steam (?, f). Terry Pratchett. These days I mostly use Pratchett’s books as a treat; I consider the few remaining books in the Discworld series which I have yet to read to be books I have to make myself deserve to be allowed to read. I started reading this book because I felt terrible at the time, but after a hundred pages or so I decided that I had not in fact deserved to read it, and so I put it away again. Unlike the two books above, I do not consider this book to be bad; that’s not why I didn’t finish it.
Anna Karenina (?, f). Tolstoy. As I pointed out in my short review, “so far (I stopped around page 140) it’s been a story about miserable Russians, and I can’t read that kind of stuff right now.” Again, I would not say this book is bad, but I could not read that kind of stuff at the time.
The Language Instinct: How the Mind Creates Language (nf., Harper Perennial Modern Classics). Pinker’s book may be one of the last popular science books I’ll read, at least for a while – I find that I simply can’t read this kind of book anymore (which is annoying, because I also bought Jonathan Haidt’s The Righteous Mind this year, and I worry that I’ll never be able to read that book, despite the content being at least somewhat interesting, simply on account of the way the book is likely to be written). As I noted while reading the book, “I’ve realized by now that I’ve probably at this point grown to strongly dislike reading popular science books. I’ve disliked other PS books I’ve read in the semi-near past as well, but I always figured I had specific reasons for disliking a particular book. At this point it seems like it’s a general thing. I don’t like these books any more. Too imprecise language, claims are consistently way too strong, etc., etc..” My reading experience of Pinker’s book was definitely not improved by the fact that I have in the past read textbooks on topics closely related to those covered in the book (Eysenck and Keane, Snowling et al.).
Physiology at a Glance (?, Wiley-Blackwell). ‘Too much work, considering the pay-off’ would probably be the short version of why I didn’t finish this one – but this should not be taken as an indication that the book is bad. Despite the words ‘at a glance’ in the title, each short chapter (2 pages) roughly matches the amount of material usually covered in an academic lecture (this is the general structure of the ‘at a glance’ books), which means that the book takes quite a bit more work than the limited page count might indicate. The fact that I knew many of the things covered didn’t mean the book was much faster to read than it otherwise might have been; it still took a lot of time and effort to digest the material. I’m sure there’s some stuff in the book which I don’t know, and stuff I’ve forgotten, and I did learn some new things from the chapters I did read, so I’m conflicted about whether or not to pick it up again later – it may be worth it at some point. However, back when I was reading it I decided in the end to just put the book away and read something else instead. If you’re looking for a dense and to-the-point introduction to physiology/anatomy, I’m sure you could do a lot worse than this book.
100 Endgames You Must Know: Vital Lessons for Every Chess Player (?, nf. New in Chess). If I just wanted to be able to say that I had ‘read’ this book, I would have finished it a long time ago, but this is not the sort of book you just ‘read’. The positions covered need to be studied and analyzed in detail, played out, and perhaps reviewed (depending on how ambitious you are about your chess). I’m more than half-way through (p. 140 or so), but I rarely feel like working on this stuff, as it’s more fun to play chess than to systematically improve at it the way you will if you work through the material covered here. It’s a great endgame book, but it takes a lot of work.
Here’s my first post about the book, which I recently finished – here’s my goodreads review. I added the book to my list of favourite books on goodreads; it’s a great textbook. Below are some observations from the first few chapters of the book.
“Several studies report T1D [type 1 diabetes] incidence numbers of 0.1–36.8/100,000 subjects worldwide (2). Above the age of 15 years ketoacidosis at presentation occurs on average in 10% of the population; in children ketoacidosis at presentation is more frequent (3, 4). Overall, publications report a male predominance (1.8 male/female ratio) and a seasonal pattern with higher incidence in November through March in European countries. Worldwide, the incidence of T1D is higher in more developed countries […] After asthma, T1D is a leading cause of chronic disease in children. […] twin studies show a low concordant prevalence of T1D of only 30–55%. […] Diabetes mellitus type 1 may be sporadic or associated with other autoimmune diseases […] The latter has been classified as autoimmune polyglandular syndrome type II (APS-II). APS-II is a polygenic disorder with a female preponderance which typically occurs between the ages of 20 and 40 years […] In clinical practice, anti-thyroxine peroxidase (TPO) positive hypothyroidism is the most frequent concomitant autoimmune disease in type 1 diabetic patients, therefore all type 1 diabetic patients should annually be screened for the presence of anti-TPO antibodies. Other frequently associated disorders are atrophic gastritis leading to vitamin B12 deficiency (pernicious anemia) and vitiligo. […] The normal human pancreas contains a superfluous amount of β-cells. In T1D, β-cell destruction therefore remains asymptomatic until a critical β-cell reserve is left. This destructive process takes months to years […] Only in a minority of type 1 diabetic patients does the disease begin with diabetic ketoacidosis, the majority presents with a milder course that may be mistaken as type 2 diabetes (7).”
“Insulin is the main regulator of glucose metabolism by stimulating glucose uptake in tissues and glycogen storage in liver and muscle and by inhibiting gluconeogenesis in the liver (11). Moreover, insulin is a growth factor for cells and cell differentiation, and acting as anabolic hormone insulin stimulates lipogenesis and protein synthesis. Glucagon is the counterpart of insulin and is secreted by the α-cells in the pancreatic islets in an inversely proportional quantity to the insulin concentration. Glucagon, being a catabolic hormone, stimulates glycolysis and gluconeogenesis in the liver as well as lipolysis and uptake of amino acids in the liver. Epinephrine and norepinephrine have comparable catabolic effects […] T1D patients lose the glucagon response to hypoglycemia after several years, when all β-cells are destructed […] The risk of hypoglycemia increases with improved glycemic control, autonomic neuropathy, longer duration of diabetes, and the presence of long-term complications (17) […] Long-term complications are prevalent in any population of type 1 diabetic patients with increasing prevalence and severity in relation to disease duration […] The pathogenesis of diabetic complications is multifactorial, complicated, and not yet fully elucidated.”
“Cataract is much more frequent in patients with diabetes and tends to become clinically significant at a younger age. Glaucoma is markedly increased in diabetes too.” (I was unaware of this).
“T1D should be considered as an independent risk factor for atherosclerosis […] An older study shows that the cumulative mortality of coronary heart disease in T1D was 35% by the age 55 (34). In comparison, the Framingham Heart Study showed a cardiovascular mortality of 8% of men and 4% of women without diabetes, respectively. […] Atherosclerosis is basically a systemic disease. Patients with one clinically apparent localization are at risk for other manifestations. […] Musculoskeletal disease in diabetes is best viewed as a systemic disorder with involvement of connective tissue. Potential pathophysiological mechanisms that play a role are glycosylation of collagen, abnormal cross-linking of collagen, and increased collagen hydration […] Dupuytren’s disease […] may be observed in up to 42% of adults with diabetes mellitus, typically in patients with long-standing T1D. Dupuytren’s is characterized by thickening of the palmar fascia due to fibrosis with nodule formation and contracture, leading to flexion contractures of the digits, most commonly affecting the fourth and fifth digits. […] Foot problems in diabetes are common and comprise ulceration, infection, and gangrene […] The lifetime risk of a foot ulcer for diabetic patients is about 15% (42). […] Wound depth is an important determinant of outcome (46, 47). Deep ulcers with cellulitis or abscess formation often involve osteomyelitis. […] Radiologic changes occur late in the course of osteomyelitis and negative radiographs certainly do not exclude it.”
“Education of people with diabetes is a comprehensive task and involves teamwork by a team that comprises at least a nurse educator, a dietician, and a physician. It is, however, essential that individuals with diabetes assume an active role in their care themselves, since appropriate self-care behavior is the cornerstone of the treatment of diabetes.” (for much more on these topics, see Simmons et al.)
“The International Diabetes Federation estimates that more than 245 million people around the world have diabetes (4). This total is expected to rise to 380 million within 20 years. Each year a further 7 million people develop diabetes. Diabetes, mostly type 2 diabetes (T2D), now affects 5.9% of the world’s adult population with almost 80% of the total in developing countries. […] According to […] 2007 prevalence data […] [a]lmost 25% of the population aged 60 years and older had diabetes in 2007. […] It has been projected that one in three Americans born in 2000 will develop diabetes, with the highest estimated lifetime risk among Latinos (males, 45.4% and females, 52.5%) (6). A rise in obesity rates is to blame for much of the increase in T2D (7). Nearly two-thirds of American adults are overweight or obese (8).” [my bold, US]
“In the natural history of progression to diabetes, β-cells initially increase insulin secretion in response to insulin resistance and, for a period of time, are able to effectively maintain glucose levels below the diabetic range. However, when β-cell function begins to decline, insulin production is inadequate to overcome the insulin resistance, and blood glucose levels rise. […] Insulin resistance, once established, remains relatively stable over time. […] progression of T2D is a result of worsening β-cell function with pre-existing insulin resistance.”
“Lifestyle modification (i.e., weight loss through diet and increased physical activity) has proven effective in reducing incident T2D in high-risk groups. The Da Qing Study (China) randomly allocated 33 clinics (557 persons with IGT) to 1 of 4 study conditions: control, diet, exercise, or diet plus exercise (23). Compared with the control group, the incidence of diabetes was reduced in the three intervention groups by 31, 46, and 42%, respectively […] The Finnish Diabetes Prevention Study evaluated 522 obese persons with IGT randomly allocated on an individual basis to a control group or a lifestyle intervention group […] During the trial, the incidence of diabetes was reduced by 58% in the lifestyle group compared with the control group. The US Diabetes Prevention Program is the largest trial of primary prevention of diabetes to date and was conducted at 27 clinical centers with 3,234 overweight and obese participants with IGT randomly allocated to 1 of 3 study conditions: control, use of metformin, or intensive lifestyle intervention […] Over 3 years, the incidence of diabetes was reduced by 31% in the metformin group and by 58% in the lifestyle group; the latter value is identical to that observed in the Finnish Study. […] Metformin is recommended as first choice for pharmacologic treatment [of type 2 diabetes] and has good efficacy to lower HbA1c […] However, most patients will eventually require treatment with combinations of oral medications with different mechanisms of action simultaneously in order to attain adequate glycemic control.”
“CVD [cardiovascular disease, US] is the cause of 65% of deaths in patients with T2D (31). Epidemiologic studies have shown that the risk of a myocardial infarction (MI) or CVD death in a diabetic individual with no prior history of CVD is comparable to that of an individual who has had a previous MI (32, 33). […] Stroke is the second leading cause of long-term disability in high-income countries and the second leading cause of death worldwide. […] Stroke incidence is highly age-dependent. The median stroke incidence in persons between 15 and 49 years of age is 10 per 100,000 per year, whereas this is 2,000 per 100,000 for persons aged 85 years or older. […] In Western communities, about 80% of strokes are caused by focal cerebral ischemia, secondary to arterial occlusion, 15% by intracerebral hemorrhage, and 5% by subarachnoid hemorrhage (2). […] Patients with ischemic stroke usually present with focal neurological deficit of sudden onset. […] Common deficits include dysphasia, dysarthria, hemianopia, weakness, ataxia, sensory loss, and cognitive disorders such as spatial neglect […] Mild-to-moderate headache is an accompanying symptom in about a quarter of all patients with ischemic stroke […] The risk of symptomatic intracranial hemorrhage after thrombolysis is higher with more severe strokes and higher age (21). [worth keeping in mind when in the ‘I-am-angry-and-need-someone-to-blame-for-the-death-of-individual-X-phase’ – if the individual died as a result of the treatment, the prognosis was probably never very good to start with..] […] Thirty-day case fatality rates for ischemic stroke in Western communities generally range between 10 and 17% (2). Stroke outcome strongly depends not only on age and comorbidity, but also on the type and cause of the infarct. Early case fatality can be as low as 2.5% in patients with lacunar infarcts (7) and as high as 78% in patients with space-occupying hemispheric infarction (8).”
“Over the past 20 years, tens of thousands of patients with acute ischemic stroke have participated in hundreds of clinical trials of putative neuroprotective therapies. Despite this enormous effort, there is no evidence of benefit of a single neuroprotective agent in humans, whereas over 500 have been effective in animal models […] the failure of neuroprotective agents in the clinic may […] be explained by the fact that most neuroprotectants inhibit only a single step in the broad cascade of events that lead to cell death (9). Currently, there is no rationale for the use of any neuroprotective medication in patients with acute ischemic stroke.”
“Between 5 and 10% of patients with ischemic stroke suffer from epileptic seizures in the first week and about 3% within the first 24 h […] Post-stroke seizures are not associated with a higher mortality […] About 1 out of every 11 patients with an early epileptic seizure develops epilepsy within 10 years after stroke onset (51) […] In the first 12 h after stroke onset, plasma glucose concentrations are elevated in up to 68% of patients, of whom more than half are not known to have diabetes mellitus (53). An initially high blood glucose concentration in patients with acute stroke is a predictor of poor outcome (53, 54). […] Acute stroke is associated with a blood pressure higher than 170/110 mmHg in about two thirds of patients. Blood pressure falls spontaneously in the majority of patients during the first week after stroke. High blood pressure during the acute phase of stroke has been associated with a poor outcome (56). It is unclear how blood pressure should be managed during the acute phase of ischemic stroke. […] routine lowering of the blood pressure is not recommended in the first week after stroke, except for extremely elevated values on repeated measurements […] Urinary incontinence affects up to 60% of stroke patients admitted to hospital, with 25% still having problems on hospital discharge, and around 15% remaining incontinent at 1 year. […] Between 22 and 43% of patients develop fever or subfebrile temperatures during the first days after stroke […] High body temperature in the first days after stroke is associated with poor outcome (42, 67). There is currently no evidence from randomized trials to support the routine lowering of body temperature above 37°C.”
i. “To be good and lead a good life means to give to others more than one takes from them.” (Leo Tolstoy)
ii. “If you tell the truth, you don’t have to remember anything.” (Mark Twain)
iii. “When we cannot obtain a thing, we comfort ourselves with the reassuring thought that it is not worth nearly as much as we believed.” (Max Scheler)
iv. “Few persons are prevented from thinking themselves right by the reflection that, if they be right, the rest of the world is wrong.” (Arthur James Balfour)
v. “Misery loves company, but company does not reciprocate.” (Addison Mizner)
vi. “It is characteristic of the unlearned that they are forever proposing something which is old, and because it has recently come to their own attention, supposing it to be new.” (Calvin Coolidge)
vii. “To be wicked is never excusable, but there is some merit in knowing that you are; the most irreparable of vices is to do evil from stupidity.” (Charles Baudelaire)
viii. “A demagogue is a person with whom we disagree as to which gang should mismanage the country.” (Donald Robert Perry Marquis)
ix. “The usual judgments are judgments of interest and they tell us less about the nature of the person judged than about the interest of the one who judges.” (Constantin Brunner)
x. “Men are forever doing two things at the same time: acting egoistically and talking moralistically.” (-ll-)
xi. “I’m not young enough to know everything.” (J. M. Barrie)
xii. “History repeats itself. That’s one of the things wrong with history.” (Clarence Darrow)
xiii. “People hate the man who is a constant drain on their sympathy.” (E. W. Howe)
xiv. “Abusing the prosperous in order to curry the favor of the envious, is an old game that still works better than it should.” (-ll-)
xv. “The world is full of people whose notion of a satisfactory future is, in fact, a return to an idealised past.” (Robertson Davies)
xvi. “When a man talks with absolute sincerity and freedom he goes on a voyage of discovery. The whole company has shares in the enterprise.” (John Jay Chapman)
xvii. “Be less curious about people and more curious about ideas.” (Marie Curie)
xviii. “Nothing in life is to be feared, it is only to be understood. Now is the time to understand more, so that we may fear less.” (-ll-)
xix. “If people were always kind and obedient to those who are cruel and unjust; the wicked people would have it all their own way: they would never feel afraid, and so they would never alter, but would grow worse and worse. When we are struck at without a reason, we should strike back again very hard; I am sure we should — so hard as to teach the person who struck us never to do it again.” (Charlotte Brontë)
xx. “Truth disdains the aid of the law for its defence – it will stand upon its own merit.” (John Leland)
I recently learned that the probability that I have brain damage as a result of my diabetes is higher than I thought it was.
I first took note of a possible link between diabetes and brain development some years ago, but it’s a topic I knew very little about before reading the book I’m currently reading. Below I have added some relevant quotes from chapters 10 and 11 of the book:
“Cognitive decrements [in adults with type 1 diabetes] are limited to only some cognitive domains and can best be characterised as a slowing of mental speed and a diminished mental flexibility, whereas learning and memory are generally spared. […] the cognitive decrements are mild in magnitude […] and seem neither to be progressive over time, nor to be substantially worse in older adults. […] neuroimaging studies […] suggest that type 1 diabetic patients have relatively subtle reductions in brain volume but these structural changes may be more pronounced in patients with an early disease onset.”
“With the rise of the subspecialty area ‘medical neuropsychology’ […] it has become apparent that many medical conditions may […] affect the structure and function of the central nervous system (CNS). Diabetes mellitus has received much attention in that regard, and there is now an extensive literature demonstrating that adults with type 1 diabetes have an elevated risk of CNS anomalies. This literature is no longer limited to small cross-sectional studies in relatively selected populations of young adults with type 1 diabetes, but now includes studies that investigated the pattern and magnitude of neuropsychological decrements and the associated neuroradiological changes in much more detail, with more sensitive measurements, in both younger and older patients.”
“Compared to non-diabetic controls, the type 1 diabetic group [in a meta-analysis including 33 studies] demonstrated a significant overall lowered performance, as well as impairment in the cognitive domains intelligence, implicit memory, speed of information processing, psychomotor efficiency, visual and sustained attention, cognitive flexibility, and visual perception. There was no difference in explicit memory, motor speed, selective attention, or language function. […] These results strongly support the hypothesis that there is a relationship between cognitive dysfunction and type 1 diabetes. Clearly, there is a modest, but statistically significant, lowered cognitive performance in patients with type 1 diabetes compared to non-diabetic controls. The pattern of cognitive findings does not suggest decline in all cognitive domains, but is characterised by a slowing of mental speed and a diminished mental flexibility. Patients with type 1 diabetes seem to be less able to flexibly apply acquired knowledge in a new situation. […] In all, the cognitive problems we see in type 1 diabetes mimic the patterns of cognitive ageing. […] One of the problems with much of this research is that it is conducted in patients who are seen in specialised medical centres where care is very good. Other aspects of population selection may also have affected the results. Persons who participate in research projects that include a detailed work-up at a hospital tend to be less affected than persons who refuse participation. Possibly, specific studies that recruit type 1 adults from the community, with individuals being in poorer health, would result in greater cognitive deficits”.
“[N]eurocognitive research suggests that type 1 diabetes is primarily associated with psychomotor slowing and reductions in mental efficiency. This pattern is more consistent with damage to the brain’s white matter than with grey-matter abnormalities. […] A very large neuroimaging literature indicates that adults with either type 1 or type 2 diabetes manifest structural changes in a number of brain regions […]. MRI changes in the brain of patients with type 1 diabetes are relatively subtle. In terms of effect sizes, these are at best large enough to distinguish the patient group from the control group, but not large enough to classify an individual subject as being patient or control.”
“[T]he subtle cognitive decrements in speed of information processing and mental flexibility found in diabetic patients are not merely caused by acute metabolic derangements or psychological factors, but point to end-organ damage in the central nervous system. Although some uncertainty remains about the exact pathogenesis, several mechanisms through which diabetes may affect the brain have now been identified […] The issue whether or not repeated episodes of severe hypoglycaemia result in permanent mild cognitive impairment has been debated extensively in the literature. […] The meta-analysis on the effect of type 1 diabetes on cognition (1) does not support the idea that there are important negative effects from recurrent episodes of severe hypoglycaemia on cognitive functioning, and large prospective studies did not confirm the earlier observations […] there is no evidence for a linear relationship between recurrent episodes of hypoglycaemia and permanent brain dysfunction in adults. […] Cerebral microvascular pathology in diabetes may result in a decrease of regional cerebral blood flow and an alteration in cerebral metabolism, which could partly explain the occurrence of cognitive impairments. It could be hypothesised that vascular pathology disrupts white-matter integrity in a way that is akin to what one sees in peripheral neuropathy and as such could perhaps affect the integrity of neurotransmitter systems and as a consequence limits cognitive efficiency. These effects are likely to occur diffusely across the brain. Indeed, this is in line with MRI findings and other reports.”
“[An] important issue is the interaction between different disease variables. In particular, patients with diabetes onset before the age of 5 […] and patients with advanced microangiopathy might be more sensitive to the effects of hypoglycaemic episodes or elevated HbA1c levels. […] decrements in cognitive function have been observed as early as 2 years after the diagnosis (63). It is important to consider the possibility that the developing brain is more vulnerable to the effect of diabetes […] Diabetes has a marked effect on brain function and structure in children and adolescents. As a group, diabetic children are more likely to perform more poorly than their nondiabetic peers in the classroom and earn lower scores on measures of academic achievement and verbal intelligence. Specialized neuropsychological testing reveals evidence of dysfunction in a variety of cognitive domains, including sustained attention, visuoperceptual skills, and psychomotor speed. Children diagnosed early in life – before 7 years of age – appear to be most vulnerable, showing impairments on virtually all types of cognitive tests, with learning and memory skills being particularly affected. Results from neurophysiological, cerebrovascular, and neuroimaging studies also show evidence of CNS anomalies. Earlier research attributed diabetes-associated brain dysfunction to episodes of recurrent hypoglycemia, but more recent studies have generally failed to find strong support for that view.”
“[M]ethodological issues notwithstanding, extant research on diabetic children’s brain function has identified a number of themes […]. All other things being equal, children diagnosed with type 1 diabetes early in life – within the first 5–7 years of age – have the greatest risk of manifesting neurocognitive dysfunction, the magnitude of which is greater than that seen in children with a later onset of diabetes. The development of brain dysfunction seems to occur within a relatively brief period of time, often appearing within the first 2–3 years following diagnosis. It is not limited to performance on neuropsychological tests, but is manifested on a wide range of electrophysiological measures as marked neural slowing. Somewhat surprisingly, the magnitude of these effects does not seem to worsen appreciably with increasing duration of diabetes – at least through early adulthood. […] As a group, diabetic children earn somewhat lower grades in school as compared to their nondiabetic classmates, are more likely to fail or repeat a grade, perform more poorly on formal tests of academic achievement, and have lower IQ scores, particularly on tests of verbal intelligence.”
“The most compelling evidence for a link between diabetes and poorer school outcomes has been provided by a Swedish population-based register study involving 5,159 children who developed diabetes between July 1997 and July 2000 and 1,330,968 nondiabetic children […] Those who developed diabetes very early in life (diagnosis before 2 years of age) had a significantly increased risk of not completing school as compared to either diabetic patients diagnosed after that age or to the reference population. Small, albeit statistically reliable between-group differences were noted in school marks, with diabetic children, regardless of age at diagnosis, consistently earning somewhat lower grades. Of note is their finding that the diabetic sample had a significantly lower likelihood of getting a high mark (passed with distinction or excellence) in two subjects and was less likely to take more advanced courses. The authors conclude that despite universal access to active diabetes care, diabetic children – particularly those with a very early disease onset – had a greatly increased risk of somewhat lower educational achievement […] Similar results have been reported by a number of smaller studies […] in the prospective Melbourne Royal Children’s Hospital (RCH) cohort study (22), […] only 68% of [the] diabetic sample completed 12 years of school, as compared to 85% of the nondiabetic comparison group […] Children with diabetes, especially those with an earlier onset, have also been found to require more remedial educational services and to be more likely to repeat a grade (25–28), to earn lower school grades over time (29), to experience somewhat greater school absenteeism (28, 30–32), to have a two to threefold increase in rates of depression (33–35), and to manifest more externalizing behavior problems (25).”
“Children with diabetes have a greatly increased risk of manifesting mild neurocognitive dysfunction. This is an incontrovertible fact that has emerged from a large body of research conducted over the past 60 years […]. There is, however, less agreement about the details. […] On standardized tests of academic achievement, diabetic children generally perform somewhat worse than their healthy peers […] Performance on measures of verbal intelligence – particularly those that assess vocabulary knowledge and general information about the world – is frequently compromised in diabetic children (9, 14, 26, 40) and in adults (41) with a childhood onset of diabetes. The few studies that have followed subjects over time have noted that verbal IQ scores tend to decline as the duration of diabetes increases (13, 15, 29). These effects appear to be more pronounced in boys and in those children with an earlier onset of diabetes. Whether this phenomenon is a marker of cognitive decline or whether it reflects a delay in cognitive development cannot yet be determined […] it is possible, but remains unproven, that psychosocial processes (e.g., school absence, depression, distress, externalizing problems) (42), and/or multiple and prolonged periods of classroom inattention and reduced motivation secondary to acute and prolonged episodes of hypoglycemia (43–45) may be contributing to the poor academic outcomes characteristic of children with diabetes. Although it may seem more reasonable to attribute poorer school performance and lower IQ scores to diabetes-associated disruption of specific neurocognitive processes (e.g., attention, learning, memory) secondary to brain dysfunction, there is little compelling evidence to support that possibility at the present time.”
“Children and adults who develop diabetes within the first 5–7 years of life may show moderate cognitive dysfunction that can affect all cognitive domains, although the specific pattern varies, depending both on the cognitive domain assessed and on the child’s age at assessment. Data from a recent meta-analysis of 19 pediatric studies have indicated that effect sizes tend to range between ∼ 0.4 and 0.5 for measures of learning, memory, and attention, but are lower for other cognitive domains (47). For the younger child with an early onset of diabetes, decrements are particularly pronounced on visuospatial tasks that require copying complex designs, solving jigsaw puzzles, or using multi-colored blocks to reproduce designs, with girls more likely to earn lower scores than boys (8). By adolescence and early adulthood, gender differences are less apparent and deficits occur on measures of attention, mental efficiency, learning, memory, eye–hand coordination, and “executive functioning” (13, 26, 40, 48–50). Not only do children with an early onset of diabetes often – but not invariably – score lower than healthy comparison subjects, but a subset earn scores that fall into the “clinically impaired” range […]. According to one estimate, the prevalence of clinically significant impairment is approximately four times higher in those diagnosed within the first 6 years of life as compared to either those diagnosed after that age or to nondiabetic peers (25 vs. 6%) (49). Nevertheless, it is important to keep in mind that not all early onset diabetic children show cognitive dysfunction, and not all tests within a particular cognitive domain differentiate diabetic from nondiabetic subjects.”
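As an aside on interpretation: under the standard assumptions (two normal distributions with equal variance), a Cohen’s d can be converted into a “probability of superiority” via Φ(d/√2) – the chance that a randomly picked control outscores a randomly picked patient. The conversion is textbook statistics, not something from the book, and the d values below are just the meta-analysis range quoted above; a minimal sketch:

```python
from math import erf

def prob_superiority(d):
    """Probability that a random draw from the higher-scoring group
    beats a random draw from the lower-scoring group, given Cohen's d
    and assuming two equal-variance normal distributions."""
    # P(X > Y) = Phi(d / sqrt(2)), with Phi(x) = 0.5 * (1 + erf(x / sqrt(2)));
    # composing the two square roots gives erf(d / 2).
    return 0.5 * (1 + erf(d / 2))

# The d ~ 0.4-0.5 range quoted above works out to roughly a 61-64%
# chance that a random control outscores a random patient.
```

Effect sizes of that magnitude imply heavily overlapping distributions – consistent with the earlier point that the changes are large enough to distinguish groups, but not to classify an individual as patient or control.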
“Slowed neural activity, measured at rest by electroencephalogram (EEG) and in response to sensory stimuli, is common in children with diabetes. On tests of auditory- or visual-evoked potentials (AEP; VEP), children and adolescents with more than a 2-year history of diabetes show significant slowing […] EEG recordings have also demonstrated abnormalities in diabetic adolescents in very good metabolic control. […] EEG abnormalities have also been associated with childhood diabetes. One large study noted that 26% of their diabetic subjects had abnormal EEG recordings, as compared to 7% of healthy controls […] diabetic children with EEG abnormalities recorded at diagnosis may be more likely to experience a seizure or coma (i.e., a severe hypoglycemic event) when blood glucose levels subsequently fall […] This intriguing possibility – that seizures occur in some diabetic children during hypoglycemia because of the presence of pre-existing brain dysfunction – requires further study.”
“A very large body of research on adults with diabetes now demonstrates that the risk of developing a wide range of neurocognitive changes – poorer cognitive function, slower neural functioning, abnormalities in cerebral blood flow and brain metabolites, and reductions or alterations in gray and white-brain matter – is associated with chronically elevated blood glucose values […] Taken together, the limited animal research on this topic […] provides quite compelling support for the view that even relatively brief bouts of chronically elevated blood glucose values can induce structural and functional changes to the brain. […] [One pathophysiological model proposed is] the “diathesis” or vulnerability model […] According to this model, in the very young child diagnosed with diabetes, chronically elevated blood glucose levels interfere with normal brain maturation at a time when those neurodevelopmental processes are particularly labile, as they are during the first 5–7 years of life […]. The resulting alterations in brain organization that occur during this “sensitive period” will not only lead to delayed cognitive development and lasting cognitive dysfunction, but may also induce a predisposition or diathesis that increases the individual’s sensitivity to subsequent insults to the brain, as could be initiated by the prolonged neuroglycopenia that occurs during an episode of hypoglycemia. Data from most, but not all, research are consistent with that view. […] Research is only now beginning to focus on plausible pathophysiological mechanisms.”
After having read these chapters, I’m now sort-of-kind-of wondering to what extent my autism was/is also at least partly diabetes-mediated. No evidence linking autism and diabetes is presented in the chapters, but you do start to wonder even so – the central nervous system is complicated.. If diabetes did play a role there, that would probably be an argument for not considering potential diabetes-mediated brain changes in me as ‘minor’, despite my somewhat higher than average IQ (just to be clear, a high observed IQ in an individual does not preclude the possibility that diabetes had a negative IQ-effect – we don’t observe the counterfactual – but a high observed IQ does make a potential IQ-lowering effect less likely to have happened, all else equal).
Some stuff from the chapters dealing with the UK:
“we now know that reducing the HbA1c too far and fast in some patients can be harmful. This is a particularly important issue where primary care is paid through the Quality Outcomes Framework (QoF), a general practice “pay for performance” programme. A major item within QoF is the proportion of patients below HbA1c criteria: such reporting is not linked to rates of hypoglycaemia, ambulance call outs or hospitalisation, i.e., a practice could receive a high payment through achieving the QoF target, but with a high hospitalisation/ambulance callout rate.”
“nationwide audit data for England 2009–2010 showed that […] targets for HbA1c (≤7.5%/58.5 mmol/mol), blood pressure (BP) (<140/80 mmHg) and total cholesterol (<4.0 mmol/l) were achieved in only 67%, 69% and 41% of people with T2D.”
One thing perhaps worth noting before moving on: the mere existence of audit data like this is itself indicative of an at least reasonable standard of care, compared to many places. In a lot of countries you simply don’t have data on this kind of thing, and it seems highly unlikely to me that the default assumption should be that things are going great in the places where such data do not exist. Denmark, incidentally, has a similar audit system, the results of which I’ve discussed in some detail before here on the blog.
“Our local audit data shows that approximately 85–90% of patients with diabetes are managed by GPs and practice nurses in Coventry and Warwickshire. Only a small proportion of newly diagnosed patients with T2D (typically around 5–10%) who attend the DESMOND (Diabetes Education and Self-Management for Ongoing and Newly Diagnosed) education programme come into contact with some aspect of the specialist services. […] Payment by results (PBR) has […] actively, albeit indirectly, disincentivised primary care to seek opinion from specialist services. […] Large volumes of data are collected by various services ranging between primary care, local laboratory facilities, ambulance services, hospital clinics (of varying specialties), retinal screening services and several allied healthcare professionals. However, the majority of these systems are not unified and therefore result in duplication of data collection and lack of data utilisation beyond the purpose of collection. This can result in missed opportunities, delayed communication, inability to use electronic solutions (prompts, alerts, algorithms etc.), inefficient use of resources and patient fatigue (repeated testing but no apparent benefit). Thus, in the majority of the regions in England, the delivery of diabetes care is disjointed and lacks integration. Each service collects and utilises data for their own “narrow” purpose, which could be used in a holistic way […] Potential consequences of the introduction of multiple service providers are fragmentation of care, reductions in continuity of care and propagation of a reluctance to refer on to a more specialist service. […] There are calls for more integration and less fragmentation in health-care, yet so far, the major integration projects in England have revealed negligible, if any, benefits [25, 32].
[…] to provide high quality care and reduce the cost burden of diabetes, any integrated diabetes care models must prioritise prevention and early aggressive intervention over downstream interventions (secondary and tertiary prevention).”
“It is estimated that 99% of diabetes care is self-management […] people with diabetes spend approximately only 3 h a year with healthcare professionals (versus 8757 h of self-management)” [this is a funny way of looking at things, which I’d never really considered before.]
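The 8,757-hour figure is simply the hours of a (non-leap) year minus the 3 annual contact hours – a quick sanity check of the quote’s arithmetic:

```python
# Hours in a non-leap year, split as in the quote
HOURS_PER_YEAR = 365 * 24                # 8,760
with_professionals = 3                    # annual contact hours, per the quote
self_managed = HOURS_PER_YEAR - with_professionals

print(self_managed)                       # 8757, matching the quote
share = with_professionals / HOURS_PER_YEAR
print(f"{share:.4%}")                     # roughly 0.03% of the year with professionals
```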
“In a traditional model of diabetes care the rigid divide between primary and specialist care is exacerbated by the provision of funding. For example the tariff system used in England, to pay for activity in specialist care, can create incentives for one part of the system to “hold on” to patients who might be better treated elsewhere. This system was originally introduced to incentivise providers to increase elective activity and reduce waiting times. Whilst it has been effective for improving access to planned care, it is not so well suited to achieving the continuity of care needed to facilitate integrated care.”
“Currently in the UK there is a mismatch between what the healthcare policies require and what the workforce is actually being trained for. […] For true integrated care in diabetes and the other long-term condition specialties to work, the education and training needs for both general practitioners and hospital specialists need to be more closely aligned.”
The chapter on Germany (Baden-Württemberg):
“An analysis of the Robert Koch-Institute (RKI) from 2012 shows that more than 50% of German people over 65 years suffer from at least one chronic disease, approximately 50% suffer from two to four chronic diseases, and over a quarter suffer from five or more diseases. […] Currently the public sector covers the majority (77%) of health expenditures in Germany […] An estimated 56.3 million people are living with diabetes in Europe. […] The mean age of the T2DM-cohort [from Kinzigtal, Germany] in 2013 was 71.2 years and 53.5% were women. In 2013 the top 5 co-morbidities of patients with T2DM were essential hypertension (78.3%), dyslipidaemia (50.5%), disorders of refraction and accommodation (38.2%), back pain (33.8%) and obesity (33.3%). […] T2DM in Kinzigtal was associated with mean expenditure of 5,935.70 € per person in 2013 (not necessarily only for diabetes care) including 40% from inpatient stays, 24% from drug prescriptions, 19% from physician remuneration in ambulatory care and the rest from remedies and adjuvants (e.g., insulin pen systems, wheelchairs, physiotherapy, etc.), work incapacity or rehabilitation.”
“Zhang et al. […] reported that globally, 12% of health expenditures […] per person were spent on diabetes in 2010. The expenditure varies by region, age group, gender, and country’s income level.”
“Over the years many approaches [have been] introduced to improve the quality and continuity of care for chronic diseases. […] the Dutch minister of health approved, in 2007, the introduction of a bundled-care (known in the Netherlands as a ‘chain-of-care’) approach for integrated chronic care, with special attention to diabetes. […] With a bundled payment approach – or episode-based payment – multiple providers are reimbursed a single sum of money for all services related to an episode of care (e.g., hospitalisation, including a period of post-acute care). This is in contrast to a reimbursement for each individual service (fee-for-service), and it is expected that this will reduce the volume of services provided and consequently lead to a reduction in spending. Since in a fee-for-service system the reimbursement is directly related to the volume of services provided, there is little incentive to reduce unnecessary care. The bundled payment approach promotes [in theory… – US] a more efficient use of services […] As far as efficiency […] is concerned, after 3 years of evaluation, several changes in care processes have been observed, including task substitution from GPs to practice nurses and increased coordination of care [31, 36], thus improving process costs. However, Elissen et al. concluded that the evidence relating to changes in process and outcome indicators remains open to doubt, and only modest improvements were shown in most indicators. […] Overall, while the Dutch approach to integrated care, using a bundled payment system with a mixed payer approach, has created a limited improvement in integration, there is no evidence that the approach has reduced morbidity and premature mortality; and it has come at an increased cost.”
“In 2013 Sweden spent the equivalent of 4,904 USD per capita on health [OECD average: 3,453 USD], with 84% of the expenditure coming from public sources [OECD average: 73%]. […] Similarly high proportions [of public spending] can be found in the Netherlands (88%), Norway (85%) and Denmark (84%). […] Sweden’s quality registers, for tracking the quality of care that patients receive and the corresponding outcomes for several conditions, are among the most developed across the OECD. Yet, the coordination of care for patients with complex needs is less good. Only one in six patients had contact with a physician or specialist nurse after discharge from hospital for stroke, again with substantial variation across counties. Fewer than half of patients with type 1 diabetes […] have their blood pressure adequately controlled, with a considerable variation (from 26% to 68%) across counties. […] at 260 admissions per 100,000 people aged over 80, avoidable hospital admissions for uncontrolled diabetes in Sweden’s elderly population are the sixth highest in the OECD, and about 1.5 times higher than in Denmark.”
“Waiting times [in Sweden] have long been a cause of dissatisfaction. In an OECD ranking of 2011, Sweden was rated second worst. […] Sweden introduced a health-care guarantee in 2005 [guaranteeing fast access in some specific contexts]. […] Most patients who appeal under the health-care guarantee and [are] prioritised in the “queue” ha[ve] acute conditions rather than medical problems as a consequence of an underlying chronic disease. Patients waiting for a hip replacement or a cataract surgery are cured after surgery and no life-long follow-up is needed. When such patients are prioritised, the long-term care for patients with chronic diseases is “crowded out,” lowering their priority and risking worse outcomes. The health-care guarantee can therefore lead to longer intervals between checkups, with difficulties in accessing health care if their pre-existing condition has deteriorated.”
“Within each region/county council the care of patients with diabetes is divided. Patients with type 1 diabetes get their care at specialist clinics in hospitals and the majority of patients with type 2 diabetes in primary care. Patients with type 2 diabetes who have severe complications are referred to the Diabetes Clinics at the hospital. Approximately 10 % of all patients with type 2 continue their care at the hospital clinics. They are almost always on insulin in high doses often in combination with oral agents but despite massive medication many of these patients have difficulties to achieve metabolic balance. Patients with advanced complications such as foot ulcers, macroangiopathic manifestations and treatment with dialysis are also treated at the hospitals.”
Do keep in mind here that even if only 10% of type 2 patients are treated in a hospital setting, type 2 patients may still make up perhaps half or more of the diabetes patients treated there; type 2 prevalence is much, much higher than type 1 prevalence. Also, in view of such treatment and referral patterns, the default assumption when doing comparative subgroup analyses should always be that type 2 patients treated in a hospital setting will have much worse outcomes than type 2 patients treated in general practice; they’re in much poorer health than the diabetics treated in general practice, or they wouldn’t be treated in a hospital setting in the first place. A related point is that regardless of how good the hospitals are at treating these type 2 patients (maybe in some contexts there isn’t actually much of a difference in outcomes between them and type 2 patients treated in general practice, even though you’d expect there to be one?), that option will usually not be scalable. It’s also to be expected that these patients are more expensive than the default type 2 patient treated by his GP [and they definitely are: “Only if severe complications arise [in the context of a type 2 patient] is the care shifted to specialised clinics in hospitals. […] these patients have the most expensive care due to costly treatment of for example foot ulcers and renal insufficiency”]; again, they’re sicker and need more comprehensive care. They would need it even if they did not get it in a hospital setting, and there are costs associated with under-treatment as well.
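The selection effect described above (what epidemiologists call confounding by indication) is easy to make concrete with a toy simulation. The numbers below are entirely hypothetical, not from the book: I simply assume that the sickest ~10 % of type 2 patients are referred to hospital and that hospital care is, if anything, slightly beneficial. Even so, a naive comparison of raw outcomes makes hospital care look much worse:

```python
import random

random.seed(0)

# Toy model of confounding by indication (hypothetical numbers, not from the book).
# Each patient has a latent severity; the sickest ~10% are referred to hospital.
patients = [random.gauss(0, 1) for _ in range(100_000)]  # latent severity
threshold = sorted(patients)[int(0.9 * len(patients))]   # top ~10% get referred

def outcome(severity, hospital):
    # Higher severity -> worse outcome; hospital care is assumed to *help* a bit.
    benefit = 0.3 if hospital else 0.0
    return severity - benefit + random.gauss(0, 0.5)

hospital = [outcome(s, True) for s in patients if s >= threshold]
gp = [outcome(s, False) for s in patients if s < threshold]

mean = lambda xs: sum(xs) / len(xs)
print(f"mean outcome, hospital-treated: {mean(hospital):.2f}")
print(f"mean outcome, GP-treated:       {mean(gp):.2f}")
# The hospital-treated group shows much worse average outcomes despite
# receiving (in this toy model) strictly beneficial care.
```

The point is not the particular numbers but the structure: as long as referral is driven by severity, the hospital-treated subgroup is a selected sample, and its worse raw outcomes tell you nothing by themselves about the quality of hospital care.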
“About 90 % of the children [with diabetes in Sweden] are classified as having Type 1 diabetes based on positive autoantibodies and a few percent receive a diagnosis of “Maturity Onset Diabetes of the Young” (MODY). Type 2 diabetes among children is very rare in Sweden.”
Lastly, some observations from the final chapter:
“The paradox that we are dealing with is that in spite of health professionals wanting the best for their patients on a patient by patient basis, the way that individuals and institutions are organised and paid, directly influences the clinical decisions that are made. […] Naturally, optimising personal care and the provider/purchaser-commissioner budget may be aligned, but this is where diabetes poses substantial problems from a health system point of view: The majority of adverse diabetes outcomes […] are many years in the future, so a system based on this year’s budget will often not prioritise the future […] Even for these adverse “diabetes” outcomes, other clinical factors contribute to the end result. […] attribution to diabetes may not be so obvious to those seeking ways to minimise expenditure.”
[I incidentally tried to get this point across in a recent discussion on SSC, but I’m not actually sure the point was understood, presumably because I did not explain it sufficiently clearly or go into enough detail. On a related note, it is my general impression that many people who would like to cut down on the implicit public subsidization of unhealthy behaviours that most developed economies to some extent engage in these days do not understand well enough the problems which e.g. the various attribution issues, and the question of how to optimize ‘post-diagnosis care’ (even if what you want to optimize is the cost minimization function…), cause in specific contexts. As I hope my comments in that thread indicate, I don’t think these sorts of issues can be ignored or dealt with in some very simple manner – and I’m tempted to say that if you think they can, you don’t know enough about these topics. I say that as one of those people who would like people who engage in risky behaviours to pay a larger (health) risk premium than they currently do.]
[Continued from above, …problems from a health system point of view:]
“Payment for ambulatory diabetes care, which is essentially the preventative part of diabetes care, usually sits in a different budget to the inpatient budget where the big expenses are. […] good evidence for reducing hospitalisation through diabetes integrated care is limited […] There is ample evidence [11, 12] where clinicians own, and profit from, other services (e.g., laboratory, radiology), that referral rates are increased, often inappropriately […] Under the English NHS, the converse exists, where GPs, either holding health budgets, or receiving payments for maintaining health budgets, reduce their referrals to more specialist care. While this may be appropriate in many cases, it may result in delays and avoidance of referrals, even when specialist care is likely to be of benefit. [this would be the under-treatment I was talking about above…] […] There is a mantra that fragmentation of care and reductions in continuity of care are likely to harm the quality of care, but hard evidence is difficult to obtain.”
“The problems outlined above, suggest that any health system that fails to take account of the need to integrate the payment system from both an immediate and long term perspective, must be at greater risk of their diabetes integration attempts failing and/or being unsustainable. […] There are clearly a number of common factors and several that differ between successful and less successful models. […] Success in these models is usually described in terms of hospitalisation (including, e.g., DKA, amputation, cardiovascular disease events, hypoglycaemia, eye disease, renal disease, all cause), metabolic outcomes (e.g., HbA1c), health costs and access to complex care. Some have described patient related outcomes, quality of life and other staff satisfaction, but the methodology and biases have often not been open to scrutiny. There are some methodological issues that suggest that many of those with positive results may be illusory and reflect the pre-existing landscape and/or wider changes, particular to that locality. […] The reported “success” of intermediate diabetes clinics run by English General Practitioners with a Special Interest led to extension of the model to other areas. This was finally tested in a randomised controlled trial […] and shown to be a more costly model with no real benefit for patients or the system. Similarly in East Cambs and Fenland, the 1 year results suggested major reductions in hospitalisation and costs in practices participating fully in the integrated care initiative, compared with those who “engaged” later. However, once the trends in neighbouring areas and among those without diabetes were accounted for, it became clear that the benefits originally reported were actually due to wider hospitalisation reductions, not just in those with diabetes. Studies of hospitalisation/hospital costs that do not compare with rates in the non-diabetic population need to be interpreted with caution.”
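The caution in that last sentence amounts to a difference-in-differences check: a fall in diabetic hospitalisation only counts as a programme effect to the extent that it exceeds the contemporaneous fall in the non-diabetic population. A toy calculation with made-up numbers (not from the book) shows how an apparently large improvement can vanish entirely once the background trend is netted out, much as in the East Cambs and Fenland case:

```python
# Hypothetical hospitalisation rates per 1,000 person-years (made-up numbers).
# A raw before/after comparison vs. a difference-in-differences estimate that
# nets out the background trend observed in the non-diabetic population.
diabetic_before, diabetic_after = 120.0, 100.0
nondiabetic_before, nondiabetic_after = 30.0, 25.0

raw_change = diabetic_after - diabetic_before  # -20 per 1,000: looks impressive

# Background trend, scaled to the diabetic baseline (proportional-trend model):
background = diabetic_before * (nondiabetic_after / nondiabetic_before - 1)  # ≈ -20

did_estimate = raw_change - background  # ≈ 0: nothing left to attribute

print(f"raw change:                {raw_change:+.1f} per 1,000")
print(f"background trend:          {background:+.1f} per 1,000")
print(f"attributable to programme: {did_estimate:+.1f} per 1,000")
```

Here the non-diabetic rate fell by the same proportion (one sixth) as the diabetic rate, so the entire apparent improvement is explained by whatever drove hospitalisation down generally, and nothing remains to credit to the diabetes initiative.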
“Kaiser Permanente is often described as a great diabetes success story in the USA due to its higher than peer levels of, e.g., HbA1c testing. However, in the 2015 HEDIS data, levels of testing, metabolic control achieved and complication rates show quality metrics lower than the English NHS, in spite of the problems with the latter. Furthermore, HbA1c rates above 9 % remain at approximately 20 % in Southern California or 19 % in Northern California, a level much higher than that in the UK […] Similarly, the Super Six model […] has been lauded as a success, as a result of reductions in patients with, e.g., amputations. However, these complications were in the bottom quartile of performance for these outcomes in England and hence improvement would be expected with the additional diabetes resources invested into the area. Amputation rates remain higher than the national average […] Studies showing improvement from a low baseline do not necessarily provide a best practice model, but perhaps a change from a system that required improvement. […] Several projects report improvements in HbA1c […] improvements in HbA1c, without reports of hypoglycaemia rates and weight gain, may be associated with worse outcomes as suggested from the ACCORD trial.”
My list of quotes on goodreads now includes 1333 quotes; these days I update that list much more often than I update my quote collection here on the blog.
i. “The graveyards are full of people the world could not do without.” (Elbert Hubbard)
ii. “The greatest mistake you can make in life is to be continually fearing you will make one.” (-ll-)
iii. “Do not dump your woes upon people — keep the sad story of your life to yourself. Troubles grow by recounting them.” (-ll-)
iv. “One of the first essentials in securing a good-natured equanimity is not to expect too much of the people amongst whom you dwell.” (William Osler)
v. “L’originalité consiste à essayer de faire comme tout le monde sans y parvenir.” (Raymond Radiguet. I decided to just post the original here because I didn’t like the English translation of the quote on wikiquotes)
vi. “Life is short, even for those who live a long time, and we must live for the few who know and appreciate us, who judge and absolve us, and for whom we have the same affection and indulgence. The rest I look upon as a mere crowd, lively or sad, loyal or corrupt, from whom there is nothing to be expected but fleeting emotions, either pleasant or unpleasant, which leave no trace behind them. We ought to hate very rarely, as it is too fatiguing; remain indifferent to a great deal, forgive often and never forget.” (Sarah Bernhardt)
vii. “There are no foolish questions and no man becomes a fool until he has stopped asking questions.” (Charles Proteus Steinmetz)
viii. “When it is useful to them, men can believe a theory of which they know nothing more than its name.” (Vilfredo Pareto)
ix. “Opinions upon moral questions are more often the expression of strongly felt expediency than of careful ethical reasoning; and the opinions so formed by one generation become the conscientious convictions or the sacred instincts of the next.” (Robert Gascoyne-Cecil)
x. “The commonest error in politics is sticking to the carcass of dead policies.” (-ll-)
xi. “If men knew how women pass the time when they are alone, they’d never marry.” (William Sydney Porter)
xii. “I often say that when you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind; it may be the beginning of knowledge, but you have scarcely, in your thoughts, advanced to the stage of science, whatever the matter may be.” (William Thomson, 1st Baron Kelvin)
xiii. “I know that I am honest and sincere in my desire to do well; but the question is whether I know enough to accomplish what I desire.” (Grover Cleveland)
xiv. “A fine quotation is a diamond on the finger of a man of wit, and a pebble in the hand of a fool.” (Joseph Roux)
xv. “There are men who are willing to marry a woman they do not care about merely because she is admired by other men. Such a relation exists between many men and their thoughts.” (Otto Weininger)
xvi. “Great inventions are never, and great discoveries are seldom, the work of any one mind. Every great invention is really an aggregation of minor inventions, or the final step of a progression. It is not usually a creation, but a growth, as truly so as is the growth of the trees in the forest.” (Robert Henry Thurston)
xvii. “Conscience is, in most men, an anticipation of the opinions of others.” (Henry Taylor)
xviii. “There is no error so monstrous that it fails to find defenders among the ablest men.” (John Emerich Edward Dalberg-Acton)
xix. “Originality consists in thinking for yourself, not in thinking differently from other people.” (James Fitzjames Stephen)
xx. “Does there, I wonder, exist a being who has read all, or approximately all, that the person of average culture is supposed to have read, and that not to have read is a social sin? If such a being does exist, surely he is an old, a very old man.” (Arnold Bennett)