Econstudentlog

Quotes

i. “Only the most uncritical minds are free from doubt.” (Aldo Leopold)

ii. “If you do not tell the truth about yourself you cannot tell it about other people.” (Virginia Woolf)

iii. “Though we see the same world, we see it through different eyes.” (-ll-)

iv. “No greater mistake can be made than to think that our institutions are fixed or may not be changed for the worse.” (Charles Evans Hughes)

v. “The image of ourselves in the minds of others is the picture of a stranger we shall never see.” (Elizabeth Bibesco)

vi. “Everybody continually tries to get away with as much as he can; and society is a marvelous machine which allows decent people to be cruel without realizing it.” (Émile Chartier)

vii. “When a man steals your wife, there is no better revenge than to let him keep her.” (Sacha Guitry)

viii. “Equipped with his five senses, man explores the universe around him and calls the adventure Science.” (Edwin Hubble)

ix. “There are two kinds of fools: one says, “This is old, therefore it is good”; the other says, “This is new, therefore it is better.”” (William Ralph Inge)

x. “We know too many things that are not true.” (Charles Kettering)

xi. “There are truths which one can only say after having won the right to say them.” (Jean Cocteau)

xii. “Where all think alike, no one thinks very much.” (Walter Lippmann)

xiii. “It requires wisdom to understand wisdom: the music is nothing if the audience is deaf.” (-ll-)

xiv. “The past is a foreign country; they do things differently there.” (L.P. Hartley)

xv. “To know is not too demanding: it merely requires memory and time. But to understand is quite a different matter: it requires intellectual ability and training, a self conscious awareness of what one is doing, experience in techniques of analysis and synthesis, and above all, perspective.” (Carroll Quigley)

xvi. “The basis of social relationships is reciprocity: if you cooperate with others, others will cooperate with you.” (-ll-. But be careful…)

xvii. “Self-pity? I see no moral objections to it, the smell drives people away, but that’s a practical objection, and occasionally an advantage.” (E. M. Forster)

xviii. “You are neither right nor wrong because people agree with you.” (Benjamin Graham)

xix. “Men substitute words for reality and then argue about the words.” (Edwin Howard Armstrong)

xx. “Science aims at constructing a world which shall be symbolic of the world of commonplace experience. It is not at all necessary that every individual symbol that is used should represent something in common experience or even something explicable in terms of common experience. The man in the street is always making this demand for concrete explanation of the things referred to in science; but of necessity he must be disappointed. It is like our experience in learning to read. That which is written in a book is symbolic of a story in real life. The whole intention of the book is that ultimately a reader will identify some symbol, say BREAD, with one of the conceptions of familiar life. But it is mischievous to attempt such identifications prematurely, before the letters are strung into words and the words into sentences. The symbol A is not the counterpart of anything in familiar life.” (Arthur Eddington)

February 24, 2017 Posted by | quotes | Leave a comment

The Ageing Immune System and Health (II)

Here’s the first post about the book. I finished it a while ago, but I recently realized that I never completed my intended coverage of it here on the blog. As some of the book’s material relates, at least loosely, to material encountered in a book I’m currently reading (Biodemography of Aging), I decided I might as well finish my coverage now and review some things I may have forgotten in the meantime by covering material from the second half of the book. It’s a nice book with some interesting observations, but as I also pointed out in my first post, it is definitely not an easy read. Below I have included some observations from the book’s second half.

Lungs:

“The aged lung is characterised by airspace enlargement similar to, but not identical with acquired emphysema [4]. Such tissue damage is detected even in non-smokers above 50 years of age as the septa of the lung alveoli are destroyed and the enlarged alveolar structures result in a decreased surface for gas exchange […] Additional problems are that surfactant production decreases with age [6] increasing the effort needed to expand the lungs during inhalation in the already reduced thoracic cavity volume where the weakened muscles are unable to thoroughly ventilate. […] As ageing is associated with respiratory muscle strength reduction, coughing becomes difficult making it progressively challenging to eliminate inhaled particles, pollens, microbes, etc. Additionally, ciliary beat frequency (CBF) slows down with age impairing the lungs’ first line of defence: mucociliary clearance [9] as the cilia can no longer repel invading microorganisms and particles. Consequently e.g. bacteria can more easily colonise the airways leading to infections that are frequent in the pulmonary tract of the older adult.”

“With age there are dramatic changes in neutrophil function, including reduced chemotaxis, phagocytosis and bactericidal mechanisms […] reduced bactericidal function will predispose to infection but the reduced chemotaxis also has consequences for lung tissue as this results in increased tissue bystander damage from neutrophil elastases released during migration […] It is currently accepted that alterations in pulmonary PPAR profile, more precisely loss of PPARγ activity, can lead to inflammation, allergy, asthma, COPD, emphysema, fibrosis, and cancer […]. Since it has been reported that PPARγ activity decreases with age, this provides a possible explanation for the increasing incidence of these lung diseases and conditions in older individuals [6].”

Cancer:

“Age is an important risk factor for cancer and subjects aged over 60 also have a higher risk of comorbidities. Approximately 50 % of neoplasms occur in patients older than 70 years […] a major concern for poor prognosis is with cancer patients over 70–75 years. These patients have a lower functional reserve, a higher risk of toxicity after chemotherapy, and an increased risk of infection and renal complications that lead to a poor quality of life. […] [Whereas] there is a difference in organs with higher cancer incidence in developed versus developing countries [,] incidence increases with ageing almost irrespective of country […] The findings from Surveillance, Epidemiology and End Results Program [SEER; incidentally, I likely shall at some point discuss this one in much more detail, as the aforementioned biodemography textbook covers this data in a lot of detail – US] [6] show that almost a third of all cancer are diagnosed after the age of 75 years and 70 % of cancer-related deaths occur after the age of 65 years. […] The traditional clinical trial focus is on younger and healthier patient, i.e. with few or no co-morbidities. These restrictions have resulted in a lack of data about the optimal treatment for older patients [7] and a poor evidence base for therapeutic decisions. […] In the older patient, neutropenia, anemia, mucositis, cardiomyopathy and neuropathy — the toxic effects of chemotherapy — are more pronounced […] The correction of comorbidities and malnutrition can lead to greater safety in the prescription of chemotherapy […] Immunosenescence is a general classification for changes occurring in the immune system during the ageing process, as the distribution and function of cells involved in innate and adaptive immunity are impaired or remodelled […] Immunosenescence is considered a major contributor to cancer development in aged individuals”.

Neurodegenerative diseases:

“Dementia and age-related vision loss are major causes of disability in our ageing population and it is estimated that a third of people aged over 75 are affected. […] age is the largest risk factor for the development of neurodegenerative diseases […] older patients with comorbidities such as atherosclerosis, type II diabetes or those suffering from repeated or chronic systemic bacterial and viral infections show earlier onset and progression of clinical symptoms […] analysis of post-mortem brain tissue from healthy older individuals has provided evidence that the presence of misfolded proteins alone does not correlate with cognitive decline and dementia, implying that additional factors are critical for neural dysfunction. We now know that innate immune genes and life-style contribute to the onset and progression of age-related neuronal dysfunction, suggesting that chronic activation of the immune system plays a key role in the underlying mechanisms that lead to irreversible tissue damage in the CNS. […] Collectively these studies provide evidence for a critical role of inflammation in the pathogenesis of a range of neurodegenerative diseases, but the factors that drive or initiate inflammation remain largely elusive.”

“The effect of infection, mimicked experimentally by administration of bacterial lipopolysaccharide (LPS) has revealed that immune to brain communication is a critical component of a host organism’s response to infection and a collection of behavioural and metabolic adaptations are initiated over the course of the infection with the purpose of restricting the spread of a pathogen, optimising conditions for a successful immune response and preventing the spread of infection to other organisms [10]. These behaviours are mediated by an innate immune response and have been termed ‘sickness behaviours’ and include depression, reduced appetite, anhedonia, social withdrawal, reduced locomotor activity, hyperalgesia, reduced motivation, cognitive impairment and reduced memory encoding and recall […]. Metabolic adaptation to infection include fever, altered dietary intake and reduction in the bioavailability of nutrients that may facilitate the growth of a pathogen such as iron and zinc [10]. These behavioural and metabolic adaptions are evolutionary highly conserved and also occur in humans”.

“Sickness behaviour and transient microglial activation are beneficial for individuals with a normal, healthy CNS, but in the ageing or diseased brain the response to peripheral infection can be detrimental and increases the rate of cognitive decline. Aged rodents exhibit exaggerated sickness and prolonged neuroinflammation in response to systemic infection […] Older people who contract a bacterial or viral infection or experience trauma postoperatively, also show exaggerated neuroinflammatory responses and are prone to develop delirium, a condition which results in a severe short term cognitive decline and a long term decline in brain function […] Collectively these studies demonstrate that peripheral inflammation can increase the accumulation of two neuropathological hallmarks of AD, further strengthening the hypothesis that inflammation i[s] involved in the underlying pathology. […] Studies from our own laboratory have shown that AD patients with mild cognitive impairment show a fivefold increased rate of cognitive decline when contracting a systemic urinary tract or respiratory tract infection […] Apart from bacterial infection, chronic viral infections have also been linked to increased incidence of neurodegeneration, including cytomegalovirus (CMV). This virus is ubiquitously distributed in the human population, and along with other age-related diseases such as cardiovascular disease and cancer, has been associated with increased risk of developing vascular dementia and AD [66, 67].”

Frailty:

“Frailty is associated with changes to the immune system, importantly the presence of a pro-inflammatory environment and changes to both the innate and adaptive immune system. Some of these changes have been demonstrated to be present before the clinical features of frailty are apparent suggesting the presence of potentially modifiable mechanistic pathways. To date, exercise programme interventions have shown promise in the reversal of frailty and related physical characteristics, but there is no current evidence for successful pharmacological intervention in frailty. […] In practice, acute illness in a frail person results in a disproportionate change in a frail person’s functional ability when faced with a relatively minor physiological stressor, associated with a prolonged recovery time […] Specialist hospital services such as surgery [15], hip fractures [16] and oncology [17] have now begun to recognise frailty as an important predictor of mortality and morbidity.”

I should probably mention here that this is another area where there’s an overlap between this book and the biodemography text I’m currently reading; chapter 7 of the latter text is about ‘Indices of Cumulative Deficits’ and covers this kind of stuff in a lot more detail than does this one, including e.g. detailed coverage of relevant statistical properties of one such index. Anyway, back to the coverage:

“Population based studies have demonstrated that the incidence of infection and subsequent mortality is higher in populations of frail people. […] The prevalence of pneumonia in a nursing home population is 30 times higher than the general population [39, 40]. […] The limited data available demonstrates that frailty is associated with a state of chronic inflammation. There is also evidence that inflammageing predates a diagnosis of frailty suggesting a causative role. […] A small number of studies have demonstrated a dysregulation of the innate immune system in frailty. Frail adults have raised white cell and neutrophil count. […] High white cell count can predict frailty at a ten year follow up [70]. […] A recent meta-analysis and four individual systematic reviews have found beneficial evidence of exercise programmes on selected physical and functional ability […] exercise interventions may have no positive effect in operationally defined frail individuals. […] To date there is no clear evidence that pharmacological interventions improve or ameliorate frailty.”

Exercise:

“[A]s we get older the time and intensity at which we exercise is severely reduced. Physical inactivity now accounts for a considerable proportion of age-related disease and mortality. […] Regular exercise has been shown to improve neutrophil microbicidal functions which reduce the risk of infectious disease. Exercise participation is also associated with increased immune cell telomere length, and may be related to improved vaccine responses. The anti-inflammatory effect of regular exercise and negative energy balance is evident by reduced inflammatory immune cell signatures and lower inflammatory cytokine concentrations. […] Reduced physical activity is associated with a positive energy balance leading to increased adiposity and subsequently systemic inflammation [5]. […] Elevated neutrophil counts accompany increased inflammation with age and the increased ratio of neutrophils to lymphocytes is associated with many age-related diseases including cancer [7]. Compared to more active individuals, less active and overweight individuals have higher circulating neutrophil counts [8]. […] little is known about the intensity, duration and type of exercise which can provide benefits to neutrophil function. […] it remains unclear whether exercise and physical activity can override the effects of NK cell dysfunction in the old. […] A considerable number of studies have assessed the effects of acute and chronic exercise on measures of T-cell immunesenescence including T cell subsets, phenotype, proliferation, cytokine production, chemotaxis, and co-stimulatory capacity. […] Taken together exercise appears to promote an anti-inflammatory response which is mediated by altered adipocyte function and improved energy metabolism leading to suppression of pro-inflammatory cytokine production in immune cells.”

February 24, 2017 Posted by | biology, books, medicine | Leave a comment

Economic Analysis in Healthcare (I)

“This book is written to provide […] a useful balance of theoretical treatment, description of empirical analyses and breadth of content for use in undergraduate modules in health economics for economics students, and for students taking a health economics module as part of their postgraduate training. Although we are writing from a UK perspective, we have attempted to make the book as relevant internationally as possible by drawing on examples, case studies and boxed highlights, not just from the UK, but from a wide range of countries”

I’m currently reading this book. The coverage has been somewhat disappointing because it’s mostly an undergraduate text which has so far mainly been covering concepts and ideas I’m already familiar with, but it’s not terrible – just okay-ish. I have added some observations from the first half of the book below.

“Health economics is the application of economic theory, models and empirical techniques to the analysis of decision making by people, health care providers and governments with respect to health and health care. […] Health economics has evolved into a highly specialised field, drawing on related disciplines including epidemiology, statistics, psychology, sociology, operations research and mathematics […] health economics is not shorthand for health care economics. […] Health economics studies not only the provision of health care, but also how this impacts on patients’ health. Other means by which health can be improved are also of interest, as are the determinants of ill-health. Health economics studies not only how health care affects population health, but also the effects of education, housing, unemployment and lifestyles.”

“Economic analyses have been used to explain the rise in obesity. […] The studies show that reasons for the rise in obesity include: *Technological innovation in food production and transportation that has reduced the cost of food preparation […] *Agricultural innovation and falling food prices that has led to an expansion in food supply […] *A decline in physical activity, both at home and at work […] *An increase in the number of fast-food outlets, resulting in changes to the relative prices of meals […]. *A reduction in the prevalence of smoking, which leads to increases in weight (Chou et al., 2004).”

“[T]he evidence is that ageing is in reality a relatively small factor in rising health care costs. The popular view is known as the ‘expansion of morbidity’ hypothesis. Gruenberg (1977) suggested that the decline in mortality that has led to an increase in the number of older people is because fewer people die from illnesses that they have, rather than because disease incidence and prevalence are lower. Lower mortality is therefore accompanied by greater morbidity and disability. However, Fries (1980) suggested an alternative hypothesis, ‘compression of morbidity’. Lower mortality rates are due to better health amongst the population, so people not only live longer, they are in better health when old. […] Zweifel et al. (1999) examined the hypothesis that the main determinant of high health care costs amongst older people is not the time since they were born, but the time until they die. Their results, confirmed by many subsequent studies, is that proximity to death does indeed explain higher health care costs better than age per se. Seshamani and Gray (2004) estimated that in the UK this is a factor up to 15 years before death, and annual costs increase tenfold during the last 5 years of life. The consensus is that ageing per se contributes little to the continuing rise in health expenditures that all countries face. Much more important drivers are improved quality of care, access to care, and more expensive new technology.”

“The difference between AC [average cost] and MC [marginal cost] is very important in applied health economics. Very often data are available on the average cost of health care services but not on their marginal cost. However, using average costs as if they were marginal costs may mislead. For example, hospital costs will be reduced by schemes that allow some patients to be treated in the community rather than being admitted. Given data on total costs of inpatient stays, it is possible to calculate an average cost per patient. It is tempting to conclude that avoiding an admission will reduce costs by that amount. However, the average includes patients with different levels of illness severity, and the more severe the illness the more costly they will be to treat. Less severely ill patients are most likely to be suitable for treatment in the community, so MC will be lower than AC. Such schemes will therefore produce a lower cost reduction than the estimate of AC suggests.
A problem with multi-product cost functions is that it is not possible to define meaningfully what the AC of a particular product is. If different products share some inputs, the costs of those inputs cannot be solely attributed to any one of them. […] In practice, when multi-product organisations such as hospitals calculate costs for particular products, they use accounting rules to share out the costs of all inputs and calculate average not marginal costs.”
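
The AC/MC point is perhaps easiest to see with numbers. Here’s a minimal sketch using made-up figures (the severity mix and the costs are purely illustrative, not taken from the book):

```python
# Hypothetical illustration (not from the book) of why the average cost per
# admission overstates the savings from diverting the least severe patients.
severe = {"n": 100, "cost_per_patient": 9_000}   # assumed figures
mild = {"n": 100, "cost_per_patient": 3_000}

total_cost = severe["n"] * severe["cost_per_patient"] + mild["n"] * mild["cost_per_patient"]
average_cost = total_cost / (severe["n"] + mild["n"])      # 6,000 per admission

# A community-care scheme is most likely to divert the mild cases, whose
# marginal cost to the hospital is roughly their own (lower) treatment cost.
marginal_cost_avoided = mild["cost_per_patient"]           # ~3,000 per admission

print(f"Average cost per admission: {average_cost:,.0f}")
print(f"Cost actually avoided/case: {marginal_cost_avoided:,.0f}")
# Using AC (6,000) to predict savings would double the true figure in this example.
```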

“Studies of economies of scale in the health sector do not give a consistent and generalisable picture. […] studies of scope economies [also] do not show any consistent and generalisable picture. […] The impact of hospital ownership type on a range of key outcomes is generally ambiguous, with different studies yielding conflicting results. […] The association between hospital ownership and patient outcomes is unclear. The evidence is mixed and inconclusive regarding the impact of hospital ownership on access to care, morbidity, mortality, and adverse events.”

“Public goods are goods that are consumed jointly by all consumers. The strict economics definition of a public good is that they have two characteristics. The first is non-rivalry. This means that the consumption of a good or service by one person does not prevent anyone else from consuming it. Non-rival goods therefore have large marginal external benefits, which make them socially very desirable but privately unprofitable to provide. Examples of nonrival goods are street lighting and pavements. The second is non-excludability. This means that it is not possible to provide a good or service to one person without letting others also consume it. […] This may lead to a free-rider problem, in which people are unwilling to pay for goods and services that are of value to them. […] Note the distinction between public goods, which are goods and services that are non-rival and non-excludable, and publicly provided goods, which are goods or services that are provided by the government for any reason. […] Most health care products and services are not public goods because they are both rival and excludable. […] However, some health care, particularly public health programmes, does have public good properties.”

“[H]ealth care is typically consumed under conditions of uncertainty with respect to the timing of health care expenditure […] and the amount of expenditure on health care that is required […] The usual solution to such problems is insurance. […] Adverse selection exists when exactly the wrong people, from the point of view of the insurance provider, choose to buy insurance: those with high risks. […] Those who are most likely to buy health insurance are those who have a relatively high probability of becoming ill and maybe also incur greater costs than the average when they are ill. […] Adverse selection arises because of the asymmetry of information between insured and insurer. […] Two approaches are adopted to prevent adverse selection. The first is experience rating, where the insurance provider sets a different insurance premium for different risk groups. Those who apply for health insurance might be asked to undergo a medical examination and to disclose any relevant facts concerning their risk status. […] There are two problems with this approach. First, the cost of acquiring the appropriate information may be high. […] Secondly, it might encourage insurance providers to ‘cherry pick’ people, only choosing to provide insurance to the low risk. This may mean that high-risk people are unable to obtain health insurance at all. […] The second approach is to make health insurance compulsory. […] The problem with this is that low-risk people effectively subsidise the health insurance payments of those with higher risks, which may be regarded […] as inequitable.”
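
To make the cross-subsidy point concrete, here’s a small sketch with hypothetical risk groups; none of the numbers are from the book:

```python
# Minimal sketch (assumed numbers, not from the book) of adverse selection under a
# single pooled ("community-rated") premium priced on the population average.
def expected_cost(group: dict) -> float:
    """Expected annual claim cost for a risk group."""
    return group["p_ill"] * group["loss"]

low_risk = {"p_ill": 0.1, "loss": 10_000, "share": 0.8}    # hypothetical risk groups
high_risk = {"p_ill": 0.5, "loss": 10_000, "share": 0.2}

pooled_premium = sum(g["share"] * expected_cost(g) for g in (low_risk, high_risk))

print(f"Expected cost, low risk:  {expected_cost(low_risk):>7,.0f}")   # 1,000
print(f"Expected cost, high risk: {expected_cost(high_risk):>7,.0f}")  # 5,000
print(f"Pooled premium:           {pooled_premium:>7,.0f}")            # 1,800

# At 1,800 the low-risk group pays well above its expected cost and may opt out;
# if only high risks remain, the break-even premium jumps to 5,000 (the adverse-
# selection spiral). Experience rating charges each group its own expected cost
# instead, while compulsory insurance keeps low risks in the pool -- which is
# exactly the cross-subsidy the quoted passage calls potentially inequitable.
```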

“Health insurance changes the economic incentives facing both the consumers and the providers of health care. One manifestation of these changes is the existence of moral hazard. This is a phenomenon common to all forms of insurance. The suggestion is that when people are insured against risks and their consequences, they are less careful about minimising them. […] Moral hazard arises when it is possible to alter the probability of the insured event, […] or the size of the insured loss […] The extent of the problem depends on the price elasticity of demand […] Three main mechanisms can be used to reduce moral hazard. The first is co-insurance. Many insurance policies require that when an event occurs the insured shares the insured loss […] with the insurer. The co-insurance rate is the percentage of the insured loss that is paid by the insured. The co-payment is the amount that they pay. […] The second is deductibles. A deductible is an amount of money the insured pays when a claim is made irrespective of co-insurance. The insurer will not pay the insured loss unless the deductible is paid by the insured. […] The third is no-claims bonuses. These are payments made by insurers to discourage claims. They usually take the form of reduced insurance premiums in the next period. […] No-claims bonuses typically discourage insurance claims where the payout by the insurer is small.”
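
Here’s a minimal sketch of how a deductible and co-insurance combine to split an insured loss; the cost-sharing terms are hypothetical, not from the book:

```python
# Minimal sketch (assumed cost-sharing terms, not from the book) of how a
# deductible plus co-insurance divides an insured loss between insured and insurer.
def out_of_pocket(loss: float, deductible: float, coinsurance_rate: float) -> float:
    """Amount the insured pays: the deductible first, then their co-insurance
    share of whatever remains above it."""
    covered = max(loss - deductible, 0.0)
    return min(loss, deductible) + coinsurance_rate * covered

for loss in (100, 1_000, 10_000):
    paid = out_of_pocket(loss, deductible=200, coinsurance_rate=0.20)
    print(f"loss {loss:>6}: insured pays {paid:>8.2f}, insurer pays {loss - paid:>9.2f}")

# Small claims are borne almost entirely by the insured, which is how these
# mechanisms blunt moral hazard for minor, price-elastic consumption.
```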

“The method of reimbursement relates to the way in which health care providers are paid for the services they provide. It is useful to distinguish between reimbursement methods, because they can affect the quantity and quality of health care. […] Retrospective reimbursement at full cost means that hospitals receive payment in full for all health care expenditures incurred in some pre-specified period of time. Reimbursement is retrospective in the sense that not only are hospitals paid after they have provided treatment, but also in that the size of the payment is determined after treatment is provided. […] Which model is used depends on whether hospitals are reimbursed for actual costs incurred, or on a fee-for-service (FFS) basis. […] Since hospital income [in these models] depends on the actual costs incurred (actual costs model) or on the volume of services provided (FFS model) there are few incentives to minimise costs. […] Prospective reimbursement implies that payments are agreed in advance and are not directly related to the actual costs incurred. […] incentives to reduce costs are greater, but payers may need to monitor the quality of care provided and access to services. If the hospital receives the same income regardless of quality, there is a financial incentive to provide low-quality care […] The problem from the point of view of the third-party payer is how best to monitor the activities of health care providers, and how to encourage them to act in a mutually beneficial way. This problem might be reduced if health care providers and third-party payers are linked in some way so that they share common goals. […] Integration between third-party payers and health care providers is a key feature of managed care.”



One of the prospective reimbursement models applied today may be of particular interest to Danes, as the DRG system is a big part of the financial model of the Danish health care system – so I’ve added a few details about this type of system below:

“An example of prospectively set costs per case is the diagnostic-related groups (DRG) pricing scheme introduced into the Medicare system in the USA in 1984, and subsequently used in a number of other countries […] Under this scheme, DRG payments are based on average costs per case in each diagnostic group derived from a sample of hospitals. […] Predicted effects of the DRG pricing scheme are cost shifting, patient shifting and DRG creep. Cost shifting and patient shifting are ways of circumventing the cost-minimising effects of DRG pricing by shifting patients or some of the services provided to patients out of the DRG pricing scheme and into other parts of the system not covered by DRG pricing. For example, instead of being provided on an inpatient basis, treatment might be provided on an outpatient basis where it is reimbursed retrospectively. DRG creep arises when hospitals classify cases into DRGs that carry a higher payment, indicating that they are more complicated than they really are. This might arise, for instance, when cases have multiple diagnoses.”
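
To make the incentive difference concrete, here’s a small sketch comparing hospital margins under retrospective full-cost reimbursement and a fixed prospective DRG tariff; all figures are invented for illustration:

```python
# Minimal sketch (assumed figures, not from the book) of the incentive difference
# between retrospective full-cost reimbursement and a prospective DRG tariff.
drg_tariff = 5_000                      # hypothetical fixed payment per case in this DRG
actual_costs = [4_000, 5_000, 7_500]    # three hypothetical cases of varying severity

for cost in actual_costs:
    retrospective_margin = 0                  # hospital is simply made whole: no incentive to economise
    prospective_margin = drg_tariff - cost    # hospital keeps savings, bears overruns
    print(f"case cost {cost:>6}: retrospective margin {retrospective_margin:>6}, "
          f"DRG margin {prospective_margin:>6}")

# Positive DRG margins reward cost reduction (and, less benignly, 'DRG creep' and
# shifting costly care out of the scheme); negative margins on expensive cases are
# the quality/access risk the quoted passage describes.
```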

February 20, 2017 Posted by | books, economics, health care | Leave a comment

Rocks: A very short introduction

I liked the book. Below I have added some sample observations from it, as well as a collection of links to various topics covered or mentioned in the text.

“To make a variety of rocks, there needs to be a variety of minerals. The Earth has shown a capacity for making an increasing variety of minerals throughout its existence. Life has helped in this [but] [e]ven a dead planet […] can evolve a fine array of minerals and rocks. This is done simply by stretching out the composition of the original homogeneous magma. […] Such stretching of composition would have happened as the magma ocean of the earliest […] Earth cooled and began to solidify at the surface, forming the first crust of this new planet — and the starting point, one might say, of our planet’s rock cycle. When magma cools sufficiently to start to solidify, the first crystals that form do not have the same composition as the overall magma. In a magma of ‘primordial Earth’ type, the first common mineral to form was probably olivine, an iron-and-magnesium-rich silicate. This is a dense mineral, and so it tends to sink. As a consequence the remaining magma becomes richer in elements such as calcium and aluminium. From this, at temperatures of around 1,000°C, the mineral plagioclase feldspar would then crystallize, in a calcium-rich variety termed anorthite. This mineral, being significantly less dense than olivine, would tend to rise to the top of the cooling magma. On the Moon, itself cooling and solidifying after its fiery birth, layers of anorthite crystals several kilometres thick built up as the rock — anorthosite — of that body’s primordial crust. This anorthosite now forms the Moon’s ancient highlands, subsequently pulverized by countless meteorite impacts. This rock type can be found on Earth, too, particularly within ancient terrains. […] Was the Earth’s first surface rock also anorthosite? Probably—but we do not know for sure, as the Earth, a thoroughly active planet throughout its existence, has consumed and obliterated nearly all of the crust that formed in the first several hundred million years of its existence, in a mysterious interval of time that we now call the Hadean Eon. […] The earliest rocks that we know of date from the succeeding Archean Eon.”

“Where plates are pulled apart, then pressure is released at depth, above the ever-opening tectonic rift, for instance beneath the mid-ocean ridge that runs down the centre of the Atlantic Ocean. The pressure release from this crustal stretching triggers decompression melting in the rocks at depth. These deep rocks — peridotite — are dense, being rich in the iron- and magnesium-bearing mineral olivine. Heated to the point at which melting just begins, so that the melt fraction makes up only a few percentage points of the total, those melt droplets are enriched in silica and aluminium relative to the original peridotite. The melt will have a composition such that, when it cools and crystallizes, it will largely be made up of crystals of plagioclase feldspar together with pyroxene. Add a little more silica and quartz begins to appear. With less silica, olivine crystallizes instead of quartz.

The resulting rock is basalt. If there was anything like a universal rock of rocky planet surfaces, it is basalt. On Earth it makes up almost all of the ocean floor bedrock — in other words, the ocean crust, that is, the surface layer, some 10 km thick. Below, there is a boundary called the Mohorovičič Discontinuity (or ‘Moho’ for short)[…]. The Moho separates the crust from the dense peridotitic mantle rock that makes up the bulk of the lithosphere. […] Basalt makes up most of the surface of Venus, Mercury, and Mars […]. On the Moon, the ‘mare’ (‘seas’) are not of water but of basalt. Basalt, or something like it, will certainly be present in large amounts on the surfaces of rocky exoplanets, once we are able to bring them into close enough focus to work out their geology. […] At any one time, ocean floor basalts are the most common rock type on our planet’s surface. But any individual piece of ocean floor is, geologically, only temporary. It is the fate of almost all ocean crust — islands, plateaux, and all — to be destroyed within ocean trenches, sliding down into the Earth along subduction zones, to be recycled within the mantle. From that destruction […] there arise the rocks that make up the most durable component of the Earth’s surface: the continents.”

“Basaltic magmas are a common starting point for many other kinds of igneous rocks, through the mechanism of fractional crystallization […]. Remove the early-formed crystals from the melt, and the remaining melt will evolve chemically, usually in the direction of increasing proportions of silica and aluminium, and decreasing amounts of iron and magnesium. These magmas will therefore produce intermediate rocks such as andesites and diorites in the finely and coarsely crystalline varieties, respectively; and then more evolved silica-rich rocks such as rhyolites (fine), microgranites (medium), and granites (coarse). […] Granites themselves can evolve a little further, especially at the late stages of crystallization of large bodies of granite magma. The final magmas are often water-rich ones that contain many of the incompatible elements (such as thorium, uranium, and lithium), so called because they are difficult to fit within the molecular frameworks of the common igneous minerals. From these final ‘sweated-out’ magmas there can crystallize a coarsely crystalline rock known as pegmatite — famous because it contains a wide variety of minerals (of the ~4,500 minerals officially recognized on Earth […] some 500 have been recognized in pegmatites).”

“The less oxygen there is [at the area of deposition], the more the organic matter is preserved into the rock record, and it is where the seawater itself, by the sea floor, has little or no oxygen that some of the great carbon stores form. As animals cannot live in these conditions, organic-rich mud can accumulate quietly and undisturbed, layer by layer, here and there entombing the skeleton of some larger planktonic organism that has fallen in from the sunlit, oxygenated waters high above. It is these kinds of sediments that […] generate[d] the oil and gas that currently power our civilization. […] If sedimentary layers have not been buried too deeply, they can remain as soft muds or loose sands for millions of years — sometimes even for hundreds of millions of years. However, most buried sedimentary layers, sooner or later, harden and turn into rock, under the combined effects of increasing heat and pressure (as they become buried ever deeper under subsequent layers of sediment) and of changes in chemical environment. […] As rocks become buried ever deeper, they become progressively changed. At some stage, they begin to change their character and depart from the condition of sedimentary strata. At this point, usually beginning several kilometres below the surface, buried igneous rocks begin to transform too. The process of metamorphism has started, and may progress until those original strata become quite unrecognizable.”

“Frozen water is a mineral, and this mineral can make up a rock, both on Earth and, very commonly, on distant planets, moons, and comets […]. On Earth today, there are large deposits of ice strata on the cold polar regions of Antarctica and Greenland, with smaller amounts in mountain glaciers […]. These ice strata, the compressed remains of annual snowfalls, have simply piled up, one above the other, over time; on Antarctica, they reach almost 5 km in thickness and at their base are about a million years old. […] The ice cannot pile up for ever, however: as the pressure builds up it begins to behave plastically and to slowly flow downslope, eventually melting or, on reaching the sea, breaking off as icebergs. As the ice mass moves, it scrapes away at the underlying rock and soil, shearing these together to form a mixed deposit of mud, sand, pebbles, and characteristic striated (ice-scratched) cobbles and boulders […] termed a glacial till. Glacial tills, if found in the ancient rock record (where, hardened, they are referred to as tillites), are a sure clue to the former presence of ice.”

“At first approximation, the mantle is made of solid rock and is not […] a seething mass of magma that the fragile crust threatens to founder into. This solidity is maintained despite temperatures that, towards the base of the mantle, are of the order of 3,000°C — temperatures that would very easily melt rock at the surface. It is the immense pressures deep in the Earth, increasing more or less in step with temperature, that keep the mantle rock in solid form. In more detail, the solid rock of the mantle may include greater or lesser (but usually lesser) amounts of melted material, which locally can gather to produce magma chambers […] Nevertheless, the mantle rock is not solid in the sense that we might imagine at the surface: it is mobile, and much of it is slowly moving plastically, taking long journeys that, over many millions of years, may encompass the entire thickness of the mantle (the kinds of speeds estimated are comparable to those at which tectonic plates move, of a few centimetres a year). These are the movements that drive plate tectonics and that, in turn, are driven by the variation in temperature (and therefore density) from the contact region with the hot core, to the cooler regions of the upper mantle.”

“The outer core will not transmit certain types of seismic waves, which indicates that it is molten. […] Even farther into the interior, at the heart of the Earth, this metal magma becomes rock once more, albeit a rock that is mostly crystalline iron and nickel. However, it was not always so. The core used to be liquid throughout and then, some time ago, it began to crystallize into iron-nickel rock. Quite when this happened has been widely debated, with estimates ranging from over three billion years ago to about half a billion years ago. The inner core has now grown to something like 2,400 km across. Even allowing for the huge spans of geological time involved, this implies estimated rates of solidification that are impressive in real time — of some thousands of tons of molten metal crystallizing into solid form per second.”
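
The quoted rate is easy to sanity-check with a back-of-the-envelope calculation. The sketch below uses my own rough assumptions about inner-core size, density and the onset of solidification (not figures from the book):

```python
# Back-of-the-envelope check (my own rough numbers, not the book's) of the
# "thousands of tons per second" crystallization rate of the inner core.
from math import pi

radius_m = 1_200e3            # inner core ~2,400 km across
density = 12_800              # kg/m^3, approximate density of the solid inner core
age_yr = 1.0e9                # assume solidification began ~1 billion years ago
seconds_per_year = 3.156e7

mass_kg = (4 / 3) * pi * radius_m**3 * density
rate_tonnes_per_s = mass_kg / (age_yr * seconds_per_year) / 1_000

print(f"Inner-core mass: {mass_kg:.2e} kg")
print(f"Average crystallization rate: {rate_tonnes_per_s:,.0f} tonnes/s")
# ~3,000 tonnes/s with these assumptions; an onset anywhere between 0.5 and 3
# billion years ago spans roughly 1,000-6,000 tonnes/s, consistent with the quote.
```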

“Rocks are made out of minerals, and those minerals are not a constant of the universe. A little like biological organisms, they have evolved and diversified through time. As the minerals have evolved, so have the rocks that they make up. […] The pattern of evolution of minerals was vividly outlined by Robert Hazen and his colleagues in what is now a classic paper published in 2008. They noted that in the depths of outer space, interstellar dust, as analysed by the astronomers’ spectroscopes, seems to be built of only about a dozen minerals […] Their component elements were forged in supernova explosions, and these minerals condensed among the matter and radiation that streamed out from these stellar outbursts. […] the number of minerals on the new Earth [shortly after formation was] about 500 (while the smaller, largely dry Moon has about 350). Plate tectonics began, with its attendant processes of subduction, mountain building, and metamorphism. The number of minerals rose to about 1,500 on a planet that may still have been biologically dead. […] The origin and spread of life at first did little to increase the number of mineral species, but once oxygen-producing photosynthesis started, then there was a great leap in mineral diversity as, for each mineral, various forms of oxide and hydroxide could crystallize. After this step, about two and a half billion years ago, there were over 4,000 minerals, most of them vanishingly rare. Since then, there may have been a slight increase in their numbers, associated with such events as the appearance and radiation of metazoan animals and plants […] Humans have begun to modify the chemistry and mineralogy of the Earth’s surface, and this has included the manufacture of many new types of mineral. […] Human-made minerals are produced in laboratories and factories around the world, with many new forms appearing every year. […] Materials sciences databases now being compiled suggest that more than 50,000 solid, inorganic, crystalline species have been created in the laboratory.”

Some links of interest:

Rock. Presolar grains. Silicate minerals. Silicon–oxygen tetrahedron. Quartz. Olivine. Feldspar. Mica. Jean-Baptiste Biot. Meteoritics. Achondrite/Chondrite/Chondrule. Carbonaceous chondrite. Iron–nickel alloy. Widmanstätten pattern. Giant-impact hypothesis (in the book this is not framed as a hypothesis nor is it explicitly referred to as the GIH; it’s just taken to be the correct account of what happened back then – US). Alfred Wegener. Arthur Holmes. Plate tectonics. Lithosphere. Asthenosphere. Fractional Melting (couldn’t find a wiki link about this exact topic; the MIT link is quite technical – sorry). Hotspot (geology). Fractional crystallization. Metastability. Devitrification. Porphyry (geology). Phenocryst. Thin section. Neptunism. Pyroclastic flow. Ignimbrite. Pumice. Igneous rock. Sedimentary rock. Weathering. Slab (geology). Clay minerals. Conglomerate (geology). Breccia. Aeolian processes. Hummocky cross-stratification. Ralph Alger Bagnold. Montmorillonite. Limestone. Ooid. Carbonate platform. Turbidite. Desert varnish. Evaporite. Law of Superposition. Stratigraphy. Pressure solution. Compaction (geology). Recrystallization (geology). Cleavage (geology). Phyllite. Aluminosilicate. Gneiss. Rock cycle. Ultramafic rock. Serpentinite. Pressure-Temperature-time paths. Hornfels. Impactite. Ophiolite. Xenolith. Kimberlite. Transition zone (Earth). Mantle convection. Mantle plume. Core–mantle boundary. Post-perovskite. Earth’s inner core. Inge Lehmann. Stromatolites. Banded iron formations. Microbial mat. Quorum sensing. Cambrian explosion. Bioturbation. Biostratigraphy. Coral reef. Radiolaria. Carbonate compensation depth. Paleosol. Bone bed. Coprolite. Allan Hills 84001. Tharsis. Pedestal crater. Mineraloid. Concrete.

February 19, 2017 Posted by | biology, books, Geology | Leave a comment

What Do Europeans Think About Muslim Immigration?

Here’s the link. I don’t usually cover this sort of stuff, but I have quoted extensively from the report below because this is some nice data, and nice data sometimes disappear from the internet if you don’t copy it in time.

The sample sizes here are large (“The total number of respondents was 10,195 (c. 1,000 per country).”), and a brief skim of the wiki article about Chatham House hardly gives the impression that this is an extreme right-wing think tank with a hidden agenda (Hillary Clinton received the Chatham House Prize just a few years ago, for example). Data was gathered online, which might of course yield slightly different results than offline data collection would, but if anything this suggests to me that the opposition seen in the data is more likely a lower-bound than an upper-bound estimate: older people, rural people and people with lower education levels are all more opposed than their counterparts, according to the data, and since these groups are also less likely to be online, they should, all else equal, be expected to be under-sampled in a data set relying exclusively on online responses. Note incidentally that you could probably infer some rough implicit effect sizes from the data; comparing the differences relating to age and education, for example, age looks like the far more important variable, at least if your interest is in the people who agree with the statement provided by Chatham House (of course, with data like this you should be very careful about making inferences about the importance of specific variables, but I can’t help noting that part of the education effect may just be a hidden age effect; I’m reasonably certain education levels have increased over time in all the countries surveyed).

“Drawing on a unique, new Chatham House survey of more than 10,000 people from 10 European states, we can throw new light on what people think about migration from mainly Muslim countries. […] respondents were given the following statement: ‘All further migration from mainly Muslim countries should be stopped’. They were then asked to what extent did they agree or disagree with this statement. Overall, across all 10 of the European countries an average of 55% agreed that all further migration from mainly Muslim countries should be stopped, 25% neither agreed nor disagreed and 20% disagreed.

Majorities in all but two of the ten states agreed, ranging from 71% in Poland, 65% in Austria, 53% in Germany and 51% in Italy to 47% in the United Kingdom and 41% in Spain. In no country did the percentage that disagreed surpass 32%.”

[Figure 1]

“Public opposition to further migration from Muslim states is especially intense in Austria, Poland, Hungary, France and Belgium, despite these countries having very different sized resident Muslim populations. In each of these countries, at least 38% of the sample ‘strongly agreed’ with the statement. […]  across Europe, opposition to Muslim immigration is especially intense among retired, older age cohorts while those aged below 30 are notably less opposed. There is also a clear education divide. Of those with secondary level qualifications, 59% opposed further Muslim immigration. By contrast, less than half of all degree holders supported further migration curbs.”

[Figure 2]

“Of those living in rural, less populated areas, 58% are opposed to further Muslim immigration. […] among those based in cities and metropolitan areas just over half agree with the statement and around a quarter are less supportive of a ban. […] nearly two-thirds of those who feel they don’t have control over their own lives [supported] the statement. Similarly, 65% of those Europeans who are dissatisfied with their life oppose further migration from Muslim countries. […] These results chime with other surveys exploring attitudes to Islam in Europe. In a Pew survey of 10 European countries in 2016, majorities of the public had an unfavorable view of Muslims living in their country in five countries: Hungary (72%), Italy (69%), Poland (66%), Greece (65%), and Spain (50%), although those numbers were lower in the UK (28%), Germany (29%) and France (29%). There was also a widespread perception in many countries that the arrival of refugees would increase the likelihood of terrorism, with a median of 59% across ten European countries holding this view.”

February 15, 2017 Posted by | current affairs, data, demographics | Leave a comment

Words

In the past I have usually combined these lists with other stuff, but I am now strongly considering making them into posts of their own, so that a potential lack of ‘other stuff’ to include is less likely to stop me from posting the words; stuff I don’t blog is more likely to get lost from my memory, and I don’t want to give myself any more excuses than necessary not to blog things I want to remember or learn. Most of the words are from books I’ve read over the last few weeks; I rarely spend time on vocabulary.com these days (I don’t encounter enough new words on the site to justify a significant amount of activity there; there are too many review questions, likely a result of me having mastered words much faster than they’ve added new ones…).

I have by now decided to stop (more or less) systematically checking in each case whether I’ve already included a word on a similar list in a previous post; not all the words on these lists will from now on necessarily be ‘new’ to me (to the extent that the words on the previous lists have been, that is…) – some of these words (and the words to come, assuming other posts follow) are likely just words I’ve forgotten about, and some are words I simply consider ‘nice’, ‘unappreciated’, or ‘not encountered often enough’. I decided to split the words in this post into smaller groups, as one big chunk of words looked slightly ‘scary’ and unapproachable to me. There’s no system to the groupings: the words were originally added at random to a list I keep of words I knew I’d want to get back to at some point, and the cut-offs I later applied when writing this post were more or less arbitrary. If you want non-arbitrary groups of interesting words, I refer to the goodreads lists.

Saudade, malapert, auriferous, frisson, anchorite, lacquerer, misoneism, camarilla, cloy, cooper, prevaricatory, impugn, prestidigitation, compeer, lapidary, contumely, contumelious.

Dotard, creel, parricide, assonance, habiliment, assail, mimesis, investiture, irruption, tenuity, tribulation, analectic, succour, auger, canker, apophthegm, haruspex, rapine.

Sward, chafferer, argol, sprightly, disport, eyas, garishly, teeter, flocculent, crick, dandle, picaresque, newel, anamnesis, imprecate, emically, mulch, sommelier, julienne.

Logomachy, chockablock, fusty, diarchy, perfervid, estivation, logy, tumescence, portcullis, lox, unprocurable, admonitory, kelp, enjambment, lithography.

February 14, 2017 Posted by | language | Leave a comment

Anesthesia

“A recent study estimated that 234 million surgical procedures requiring anaesthesia are performed worldwide annually. Anaesthesia is the largest hospital specialty in the UK, with over 12,000 practising anaesthetists […] In this book, I give a short account of the historical background of anaesthetic practice, a review of anaesthetic equipment, techniques, and medications, and a discussion of how they work. The risks and side effects of anaesthetics will be covered, and some of the subspecialties of anaesthetic practice will be explored.”

I liked the book, and I gave it three stars on goodreads; I was closer to four stars than to two. Below I have added a few sample observations from the book, as well as what turned out to be a quite considerable number of links (more than 60, from a brief count) to topics/people/etc. discussed or mentioned in the text. I decided to spend a bit more time finding relevant links than I have previously done when writing link-heavy posts, so in this post I have not limited myself to wikipedia articles and e.g. also link directly to primary literature discussed in the coverage. The links provided are, as usual, meant to indicate which kind of stuff is covered in the book rather than to serve as an alternative to the book; some of the wikipedia articles in particular I assume are not very good (the main point of a link to a wikipedia article of questionable quality should probably be taken to be an indication that I consider ‘awareness of the existence of concept X’ to be of interest/importance also to people who have not read this book, even if no great resource on the topic was immediately at hand to me).

Sample observations from the book:

“[G]eneral anaesthesia is not sleep. In physiological terms, the two states are very dissimilar. The term general anaesthesia refers to the state of unconsciousness which is deliberately produced by the action of drugs on the patient. Local anaesthesia (and its related terms) refers to the numbness produced in a part of the body by deliberate interruption of nerve function; this is typically achieved without affecting consciousness. […] The purpose of inhaling ether vapour [in the past] was so that surgery would be painless, not so that unconsciousness would necessarily be produced. However, unconsciousness and immobility soon came to be considered desirable attributes […] For almost a century, lying still was the only reliable sign of adequate anaesthesia.”

“The experience of pain triggers powerful emotional consequences, including fear, anger, and anxiety. A reasonable word for the emotional response to pain is ‘suffering’. Pain also triggers the formation of memories which remind us to avoid potentially painful experiences in the future. The intensity of pain perception and suffering also depends on the mental state of the subject at the time, and the relationship between pain, memory, and emotion is subtle and complex. […] The effects of adrenaline are responsible for the appearance of someone in pain: pale, sweating, trembling, with a rapid heart rate and breathing. Additionally, a hormonal storm is activated, readying the body to respond to damage and fight infection. This is known as the stress response. […] Those responses may be abolished by an analgesic such as morphine, which will counteract all those changes. For this reason, it is routine to use analgesic drugs in addition to anaesthetic ones. […] Typical anaesthetic agents are poor at suppressing the stress response, but analgesics like morphine are very effective. […] The hormonal stress response can be shown to be harmful, especially to those who are already ill. For example, the increase in blood coagulability which evolved to reduce blood loss as a result of injury makes the patient more likely to suffer a deep venous thrombosis in the leg veins.”

“If we monitor the EEG of someone under general anaesthesia, certain identifiable changes to the signal occur. In general, the frequency spectrum of the signal slows. […] Next, the overall power of the signal diminishes. In very deep general anaesthesia, short periods of electrical silence, known as burst suppression, can be observed. Finally, the overall randomness of the signal, its entropy, decreases. In short, the EEG of someone who is anaesthetized looks completely different from someone who is awake. […] Depth of anaesthesia is no longer considered to be a linear concept […] since it is clear that anaesthesia is not a single process. It is now believed that the two most important components of anaesthesia are unconsciousness and suppression of the stress response. These can be represented on a three-dimensional diagram called a response surface. [Here’s incidentally a recent review paper on related topics, US]”
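
Those three changes (spectral slowing, falling power, falling spectral entropy) are straightforward to compute from a signal. The sketch below does so for two synthetic ‘EEG-like’ traces; it’s a toy illustration of the quantities involved, not the algorithm used by any actual depth-of-anaesthesia monitor:

```python
# Toy illustration (synthetic data, simplified definitions) of the three EEG
# changes described above: spectral slowing, falling power, falling spectral entropy.
import numpy as np

def spectral_summary(signal: np.ndarray, fs: float):
    """Return (median frequency, total power, normalised spectral entropy)."""
    psd = np.abs(np.fft.rfft(signal)) ** 2                 # unnormalised power spectrum
    freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
    p = psd / psd.sum()                                    # treat spectrum as a probability distribution
    median_f = freqs[np.searchsorted(np.cumsum(p), 0.5)]
    entropy = -np.sum(p * np.log(p + 1e-12)) / np.log(p.size)  # ~1 for white noise, ~0 for a pure tone
    return median_f, psd.sum(), entropy

fs = 256
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
awake = np.sin(2 * np.pi * 10 * t) + 0.8 * rng.standard_normal(t.size)               # alpha-ish rhythm + broadband noise
anaesthetized = 0.5 * np.sin(2 * np.pi * 2 * t) + 0.1 * rng.standard_normal(t.size)  # slower, smaller, more regular

for label, sig in [("awake", awake), ("anaesthetized", anaesthetized)]:
    f_med, power, h = spectral_summary(sig, fs)
    print(f"{label:>13}: median freq {f_med:5.1f} Hz, total power {power:12.1f}, spectral entropy {h:.2f}")
```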

“Before the widespread advent of anaesthesia, there were very few painkilling options available. […] Alcohol was commonly given as a means of enhancing the patient’s courage prior to surgery, but alcohol has almost no effect on pain perception. […] For many centuries, opium was the only effective pain-relieving substance known. […] For general anaesthesia to be discovered, certain prerequisites were required. On the one hand, the idea that surgery without pain was achievable had to be accepted as possible. Despite tantalizing clues from history, this idea took a long time to catch on. The few workers who pursued this idea were often openly ridiculed. On the other, an agent had to be discovered that was potent enough to render a patient suitably unconscious to tolerate surgery, but not so potent that overdose (hence accidental death) was too likely. This agent also needed to be easy to produce, tolerable for the patient, and easy enough for untrained people to administer. The herbal candidates (opium, mandrake) were too unreliable or dangerous. The next reasonable candidate, and every agent since, was provided by the proliferating science of chemistry.”

“Inducing anaesthesia by intravenous injection is substantially quicker than the inhalational method. Inhalational induction may take several minutes, while intravenous induction happens in the time it takes for the blood to travel from the needle to the brain (30 to 60 seconds). The main benefit of this is not convenience or comfort but patient safety. […] It was soon discovered that the ideal balance is to induce anaesthesia intravenously, but switch to an inhalational agent […] to keep the patient anaesthetized during the operation. The template of an intravenous induction followed by maintenance with an inhalational agent is still widely used today. […] Most of the drawbacks of volatile agents disappear when the patient is already anaesthetized [and] volatile agents have several advantages for maintenance. First, they are predictable in their effects. Second, they can be conveniently administered in known quantities. Third, the concentration delivered or exhaled by the patient can be easily and reliably measured. Finally, at steady state, the concentration of volatile agent in the patient’s expired air is a close reflection of its concentration in the patient’s brain. This gives the anaesthetist a reliable way of ensuring that enough anaesthetic is present to ensure the patient remains anaesthetized.”

“All current volatile agents are colourless liquids that evaporate into a vapour which produces general anaesthesia when inhaled. All are chemically stable, which means they are non-flammable, and not likely to break down or be metabolized to poisonous products. What distinguishes them from each other are their specific properties: potency, speed of onset, and smell. Potency of an inhalational agent is expressed as MAC, the minimum alveolar concentration required to keep 50% of adults unmoving in response to a standard surgical skin incision. MAC as a concept was introduced […] in 1963, and has proven to be a very useful way of comparing potencies of different anaesthetic agents. […] MAC correlates with observed depth of anaesthesia. It has been known for over a century that potency correlates very highly with lipid solubility; that is, the more soluble an agent is in lipid […], the more potent an anaesthetic it is. This is known as the Meyer-Overton correlation […] Speed of onset is inversely proportional to water solubility. The less soluble in water, the more rapidly an agent will take effect. […] Where immobility is produced at around 1.0 MAC, amnesia is produced at a much lower dose, typically 0.25 MAC, and unconsciousness at around 0.5 MAC. Therefore, a patient may move in response to a surgical stimulus without either being conscious of the stimulus, or remembering it afterwards.”
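
The Meyer-Overton correlation is easy to illustrate numerically. The MAC and oil/gas partition values below are rough textbook-style figures I have filled in myself (they are not from this book), so treat the snippet as an illustrative sketch rather than a reference table.

```python
# Meyer-Overton correlation: MAC (vol %) multiplied by the oil/gas partition
# coefficient is roughly constant across agents. Approximate values, for illustration only.
agents = {
    # agent: (MAC in vol %, oil/gas partition coefficient)
    "halothane":     (0.75, 224),
    "isoflurane":    (1.15,  91),
    "sevoflurane":   (2.0,   47),
    "desflurane":    (6.0,   19),
    "nitrous oxide": (104,   1.4),
}

for name, (mac, oil_gas) in agents.items():
    print(f"{name:14s}  MAC x lambda(oil/gas) ~ {mac * oil_gas:6.0f}")
# The products cluster within roughly a factor of two of each other,
# even though MAC itself varies by more than two orders of magnitude.
```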

“The most useful way to estimate the body’s physiological reserve is to assess the patient’s tolerance for exercise. Exercise is a good model of the surgical stress response. The greater the patient’s tolerance for exercise, the better the perioperative outcome is likely to be […] For a smoker who is unable to quit, stopping for even a couple of days before the operation improves outcome. […] Dying ‘on the table’ during surgery is very unusual. Patients who die following surgery usually do so during convalescence, their weakened state making them susceptible to complications such as wound breakdown, chest infections, deep venous thrombosis, and pressure sores.”

“Mechanical ventilation is based on the principle of intermittent positive pressure ventilation (IPPV), gas being ‘blown’ into the patient’s lungs from the machine. […] Inflating a patient’s lungs is a delicate process. Healthy lung tissue is fragile, and can easily be damaged by overdistension (barotrauma). While healthy lung tissue is light and spongy, and easily inflated, diseased lung tissue may be heavy and waterlogged and difficult to inflate, and therefore may collapse, allowing blood to pass through it without exchanging any gases (this is known as shunt). Simply applying higher pressures may not be the answer: this may just overdistend adjacent areas of healthier lung. The ventilator must therefore provide a series of breaths whose volume and pressure are very closely controlled. Every aspect of a mechanical breath may now be adjusted by the anaesthetist: the volume, the pressure, the frequency, and the ratio of inspiratory time to expiratory time are only the basic factors.”

“All anaesthetic drugs are poisons. Remember that in achieving a state of anaesthesia you intend to poison someone, but not kill them – so give as little as possible. [Introductory quote to a chapter, from an Anaesthetics textbook – US] […] Other cells besides neurons use action potentials as the basis of cellular signalling. For example, the synchronized contraction of heart muscle is performed using action potentials, and action potentials are transmitted from nerves to skeletal muscle at the neuromuscular junction to initiate movement. Local anaesthetic drugs are therefore toxic to the heart and brain. In the heart, local anaesthetic drugs interfere with normal contraction, eventually stopping the heart. In the brain, toxicity causes seizures and coma. To avoid toxicity, the total dose is carefully limited”.

Links of interest:

Anaesthesia.
General anaesthesia.
Muscle relaxant.
Nociception.
Arthur Ernest Guedel.
Guedel’s classification.
Beta rhythm.
Frances Burney.
Laudanum.
Dwale.
Henry Hill Hickman.
Horace Wells.
William Thomas Green Morton.
Diethyl ether.
Chloroform.
James Young Simpson.
Joseph Thomas Clover.
Barbiturates.
Inhalational anaesthetic.
Antisialagogue.
Pulmonary aspiration.
Principles of Total Intravenous Anaesthesia (TIVA).
Propofol.
Patient-controlled analgesia.
Airway management.
Oropharyngeal airway.
Tracheal intubation.
Laryngoscopy.
Laryngeal mask airway.
Anaesthetic machine.
Soda lime.
Sodium thiopental.
Etomidate.
Ketamine.
Neuromuscular-blocking drug.
Neostigmine.
Sugammadex.
Gate control theory of pain.
Multimodal analgesia.
Hartmann’s solution (…what this is called seems to depend on whom you ask, but it’s called Hartmann’s solution in the book…).
Local anesthetic.
Karl Koller.
Amylocaine.
Procaine.
Lidocaine.
Regional anesthesia.
Spinal anaesthesia.
Epidural nerve block.
Intensive care medicine.
Bjørn Aage Ibsen.
Chronic pain.
Pain wind-up.
John Bonica.
Twilight sleep.
Veterinary anesthesia.
Pearse et al. (results of paper briefly discussed in the book).
Awareness under anaesthesia (skip the first page).
Pollard et al. (2007).
Postoperative nausea and vomiting.
Postoperative cognitive dysfunction.
Monk et al. (2008).
Malignant hyperthermia.
Suxamethonium apnoea.

February 13, 2017 Posted by | books, Chemistry, medicine, papers, Pharmacology | Leave a comment

Particle Physics

(Two SMBC comics – strips 20090213 and 20090703 – were displayed here.)

(Both from Smbc. There were a lot of relevant ones to choose from – in the end I decided to only include the two comics displayed above, but you should be aware that several others would have fit just as well…)

The book is a bit dated – it was published before the LHC even started operations – but it’s a decent read. I can’t say I liked it as much as the other books in the series which I recently covered, on galaxies and the laws of thermodynamics, mostly because this book is more pop-science-y, so the depth of coverage was at times a little disappointing by comparison. That said, the book is far from terrible, I learned a lot, and I can imagine the author faced a very difficult task.

Below I have added a few observations from the book and some links to articles about some key concepts and things mentioned/covered in the book.

“[T]oday we view the collisions between high-energy particles as a means of studying the phenomena that ruled when the universe was newly born. We can study how matter was created and discover what varieties there were. From this we can construct the story of how the material universe has developed from that original hot cauldron to the cool conditions here on Earth today, where matter is made from electrons, without need for muons and taus, and where the seeds of atomic nuclei are just the up and down quarks, without need for strange or charming stuff.

In very broad terms, this is the story of what has happened. The matter that was born in the hot Big Bang consisted of quarks and particles like the electron. As concerns the quarks, the strange, charm, bottom, and top varieties are highly unstable, and died out within a fraction of a second, the weak force converting them into their more stable progeny, the up and down varieties which survive within us today. A similar story took place for the electron and its heavier versions, the muon and tau. This latter pair are also unstable and died out, courtesy of the weak force, leaving the electron as survivor. In the process of these decays, lots of neutrinos and electromagnetic radiation were also produced, which continue to swarm throughout the universe some 14 billion years later.

The up and down quarks and the electrons were the survivors while the universe was still very young and hot. As it cooled, the quarks were stuck to one another, forming protons and neutrons. The mutual gravitational attraction among these particles gathered them into large clouds that were primaeval stars. As they bumped into one another in the heart of these stars, the protons and neutrons built up the seeds of heavier elements. Some stars became unstable and exploded, ejecting these atomic nuclei into space, where they trapped electrons to form atoms of matter as we know it. […] What we can now do in experiments is in effect reverse the process and observe matter change back into its original primaeval forms.”

“A fully grown human is a bit less than two metres tall. […] to set the scale I will take humans to be about 1 metre in ‘order of magnitude’ […yet another smbc comic springs to mind here] […] Then, going to the large scales of astronomy, we have the radius of the Earth, some 107 m […]; that of the Sun is 109 m; our orbit around the Sun is 1011 m […] note that the relative sizes of the Earth, Sun, and our orbit are factors of about 100. […] Whereas the atom is typically 10–10 m across, its central nucleus measures only about 10–14 to 10–15 m. So beware the oft-quoted analogy that atoms are like miniature solar systems with the ‘planetary electrons’ encircling the ‘nuclear sun’. The real solar system has a factor 1/100 between our orbit and the size of the central Sun; the atom is far emptier, with 1/10,000 as the corresponding ratio between the extent of its central nucleus and the radius of the atom. And this emptiness continues. Individual protons and neutrons are about 10–15 m in diameter […] the relative size of quark to proton is some 1/10,000 (at most!). The same is true for the ‘planetary’ electron relative to the proton ‘sun’: 1/10,000 rather than the ‘mere’ 1/100 of the real solar system. So the world within the atom is incredibly empty.”

“Our inability to see atoms has to do with the fact that light acts like a wave and waves do not scatter easily from small objects. To see a thing, the wavelength of the beam must be smaller than that thing is. Therefore, to see molecules or atoms needs illuminations whose wavelengths are similar to or smaller than them. Light waves, like those our eyes are sensitive to, have wavelength about 10–7 m […]. This is still a thousand times bigger than the size of an atom. […] To have any chance of seeing molecules and atoms we need light with wavelengths much shorter than these. [And so we move into the world of X-ray crystallography and particle accelerators] […] To probe deep within atoms we need a source of very short wavelength. […] the technique is to use the basic particles […], such as electrons and protons, and speed them in electric fields. The higher their speed, the greater their energy and momentum and the shorter their associated wavelength. So beams of high-energy particles can resolve things as small as atoms.”
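
A small back-of-the-envelope calculation of my own (not from the book) showing how particle energy translates into resolving power, using the de Broglie relation for an electron:

```python
import math

hc_eV_nm = 1239.84          # h*c in eV*nm
m_e_eV   = 0.511e6          # electron rest mass energy in eV

def electron_wavelength_nm(kinetic_energy_eV):
    """de Broglie wavelength of an electron, valid at any speed:
    p*c = sqrt(E_k^2 + 2*E_k*m*c^2), lambda = h*c/(p*c)."""
    pc = math.sqrt(kinetic_energy_eV**2 + 2 * kinetic_energy_eV * m_e_eV)
    return hc_eV_nm / pc

for E in (100, 1e4, 1e9):   # 100 eV, 10 keV, 1 GeV
    print(f"{E:9.0e} eV  ->  ~{electron_wavelength_nm(E):.2e} nm")
# ~100 eV already gets the wavelength down to atomic scales (~0.1 nm = 1e-10 m);
# resolving structure inside a proton (~1e-15 m) takes energies of order a GeV.
```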

“About 400 billion neutrinos from the Sun pass through each one of us each second.”

“For a century beams of particles have been used to reveal the inner structure of atoms. These have progressed from naturally occurring alpha and beta particles, courtesy of natural radioactivity, through cosmic rays to intense beams of electrons, protons, and other particles at modern accelerators. […] Different particles probe matter in complementary ways. It has been by combining the information from [the] various approaches that our present rich picture has emerged. […] It was the desire to replicate the cosmic rays under controlled conditions that led to modern high-energy physics at accelerators. […] Electrically charged particles are accelerated by electric forces. Apply enough electric force to an electron, say, and it will go faster and faster in a straight line […] Under the influence of a magnetic field, the path of a charged particle will curve. By using electric fields to speed them, and magnetic fields to bend their trajectory, we can steer particles round circles over and over again. This is the basic idea behind huge rings, such as the 27-km-long accelerator at CERN in Geneva. […] our ability to learn about the origins and nature of matter have depended upon advances on two fronts: the construction of ever more powerful accelerators, and the development of sophisticated means of recording the collisions.”
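
To get a feel for why such rings need to be so large: the bending radius of a charged particle in a magnetic field is r = p/(qB). The numbers below are my own rough approximations of LHC-like values, not figures from the book, and in reality the dipole magnets only occupy part of the circumference.

```python
# Radius of curvature of a singly charged, highly relativistic particle in a magnetic field.
# With p ~ E/c, a convenient form is r[m] ~ E[GeV] / (0.2998 * B[T]).
def bending_radius_m(energy_GeV, B_tesla):
    return energy_GeV / (0.2998 * B_tesla)

# Approximate LHC-style numbers (illustrative): 7000 GeV protons, ~8.3 T dipole magnets.
print(bending_radius_m(7000, 8.3))   # ~2.8 km of bending radius
# The ring itself is ~27 km around (~4.3 km geometric radius) because the
# bending magnets fill only part of the circumference.
```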

Matter.
Particle.
Particle physics.
Strong interaction.
Weak interaction (‘good article’).
Electron (featured).
Quark (featured).
Fundamental interactions.
Electronvolt.
Electromagnetic spectrum.
Cathode ray.
Alpha particle.
Cloud chamber.
Atomic spectroscopy.
Ionization.
Resonance (particle physics).
Spin (physics).
Beta decay.
Neutrino.
Neutrino astronomy.
Antiparticle.
Baryon/meson.
Pion.
Particle accelerator/Cyclotron/Synchrotron/Linear particle accelerator.
Collider.
B-factory.
Particle detector.
Cherenkov radiation.
Sudbury Neutrino Observatory.
Quantum chromodynamics.
Color charge.
Force carrier.
W and Z bosons.
Electroweak interaction (/theory).
Exotic matter.
Strangeness.
Strange quark.
Charm (quantum number).
Antimatter.
Inverse beta decay.
Dark matter.
Standard model.
Supersymmetry.
Higgs boson.
Quark–gluon plasma.
CP violation.

February 9, 2017 Posted by | books, Physics | Leave a comment

Books 2017

Below is a list of books I’ve read in 2017.

The letters ‘f’, ‘nf’, and ‘m’ in the parentheses indicate which type of book each one is; ‘f’ refers to ‘fiction’ books, ‘nf’ to ‘non-fiction’ books, and the ‘m’ category covers ‘miscellaneous’ books. The numbers in the parentheses correspond to the goodreads ratings I thought the books deserved.

As usual I’ll try to update the post regularly throughout the year.

1. Brief Candles (3, f). Manning Coles.

2. Galaxies: A Very Short Introduction (4, nf. Oxford University Press). Blog coverage here.

3. Mirabile (2, f). Janet Kagan. Short goodreads review here.

4. Blackout (5, f). Connie Willis. Goodreads review here (note that this review is a ‘composite review’ of both Blackout and All Clear).

5. All Clear (5, f). Connie Willis.

6. The Laws of Thermodynamics: A Very Short Introduction (4, nf. Oxford University Press). Blog coverage here.

7. A Knight of the Seven Kingdoms (3, f). George R. R. Martin. Goodreads review here.

8. The Economics of International Immigration (1, nf. Springer). Goodreads review here.

9. American Gods (2, f). Neil Gaiman. Short goodreads review here – I was not impressed.

10. The Story of the Stone (3, f). Barry Hughart. Goodreads review here.

11. Particle Physics: A Very Short Introduction (3, nf. Oxford University Press). Blog coverage here.

12. The Wallet of Kai Lung (4, f). Ernest Bramah. Goodreads review here.

13. Kai Lung’s Golden Hours (4, f). Ernest Bramah.

14. Kai Lung Unrolls His Mat (4, f). Ernest Bramah. Goodreads review here.

15. Anaesthesia: A Very Short Introduction (3, nf. Oxford University Press). Blog coverage here.

16. The Moon of Much Gladness (5, f). Ernest Bramah. Goodreads review here.

17. All Trivia – A collection of reflections & aphorisms (2, m). Logan Pearsall Smith. Short goodreads review here.

18. Rocks: A very short introduction (3, nf. Oxford University Press). Blog coverage here.

19. Kai Lung Beneath the Mulberry-Tree (4, f). Ernest Bramah.

20. Economic Analysis in Healthcare (2, nf. Wiley). Blog coverage here.

21. The Best of Connie Willis: Award-Winning Stories (f.). Connie Willis. Goodreads review here.

February 9, 2017 Posted by | books | Leave a comment

The Laws of Thermodynamics

Here’s a relevant 60 symbols video with Mike Merrifield. Below are a few observations from the book, and some links.

“Among the hundreds of laws that describe the universe, there lurks a mighty handful. These are the laws of thermodynamics, which summarize the properties of energy and its transformation from one form to another. […] The mighty handful consists of four laws, with the numbering starting inconveniently at zero and ending at three. The first two laws (the ‘zeroth’ and the ‘first’) introduce two familiar but nevertheless enigmatic properties, the temperature and the energy. The third of the four (the ‘second law’) introduces what many take to be an even more elusive property, the entropy […] The second law is one of the all-time great laws of science […]. The fourth of the laws (the ‘third law’) has a more technical role, but rounds out the structure of the subject and both enables and foils its applications.”

“Classical thermodynamics is the part of thermodynamics that emerged during the nineteenth century before everyone was fully convinced about the reality of atoms, and concerns relationships between bulk properties. You can do classical thermodynamics even if you don’t believe in atoms. Towards the end of the nineteenth century, when most scientists accepted that atoms were real and not just an accounting device, there emerged the version of thermodynamics called statistical thermodynamics, which sought to account for the bulk properties of matter in terms of its constituent atoms. The ‘statistical’ part of the name comes from the fact that in the discussion of bulk properties we don’t need to think about the behaviour of individual atoms but we do need to think about the average behaviour of myriad atoms. […] In short, whereas dynamics deals with the behaviour of individual bodies, thermodynamics deals with the average behaviour of vast numbers of them.”

“In everyday language, heat is both a noun and a verb. Heat flows; we heat. In thermodynamics heat is not an entity or even a form of energy: heat is a mode of transfer of energy. It is not a form of energy, or a fluid of some kind, or anything of any kind. Heat is the transfer of energy by virtue of a temperature difference. Heat is the name of a process, not the name of an entity.”

“The supply of 1J of energy as heat to 1 g of water results in an increase in temperature of about 0.2°C. Substances with a high heat capacity (water is an example) require a larger amount of heat to bring about a given rise in temperature than those with a small heat capacity (air is an example). In formal thermodynamics, the conditions under which heating takes place must be specified. For instance, if the heating takes place under conditions of constant pressure with the sample free to expand, then some of the energy supplied as heat goes into expanding the sample and therefore to doing work. Less energy remains in the sample, so its temperature rises less than when it is constrained to have a constant volume, and therefore we report that its heat capacity is higher. The difference between heat capacities of a system at constant volume and at constant pressure is of most practical significance for gases, which undergo large changes in volume as they are heated in vessels that are able to expand.”
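
A quick sanity check of my own on the figure quoted above, plus the standard ideal-gas result for why constant-pressure heating needs more energy than constant-volume heating (Cp − Cv = R):

```python
R = 8.314            # gas constant, J/(mol*K)

# 1 J of heat supplied to 1 g of water (specific heat ~4.18 J/(g*K)):
c_water = 4.18
print(1 / c_water)   # ~0.24 K, i.e. roughly the "about 0.2 C" quoted above

# For an ideal gas the molar heat capacities at constant pressure and constant
# volume differ by exactly R, the energy that goes into expansion work:
Cv = 1.5 * R         # monatomic ideal gas
Cp = Cv + R
print(Cv, Cp)        # ~12.5 vs ~20.8 J/(mol*K)
```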

“Heat capacities vary with temperature. An important experimental observation […] is that the heat capacity of every substance falls to zero when the temperature is reduced towards absolute zero (T = 0). A very small heat capacity implies that even a tiny transfer of heat to a system results in a significant rise in temperature, which is one of the problems associated with achieving very low temperatures when even a small leakage of heat into a sample can have a serious effect on the temperature”.

“A crude restatement of Clausius’s statement is that refrigerators don’t work unless you turn them on.”

“The Gibbs energy is of the greatest importance in chemistry and in the field of bioenergetics, the study of energy utilization in biology. Most processes in chemistry and biology occur at constant temperature and pressure, and so to decide whether they are spontaneous and able to produce non-expansion work we need to consider the Gibbs energy. […] Our bodies live off Gibbs energy. Many of the processes that constitute life are non-spontaneous reactions, which is why we decompose and putrefy when we die and these life-sustaining reactions no longer continue. […] In biology a very important ‘heavy weight’ reaction involves the molecule adenosine triphosphate (ATP). […] When a terminal phosphate group is snipped off by reaction with water […], to form adenosine diphosphate (ADP), there is a substantial decrease in Gibbs energy, arising in part from the increase in entropy when the group is liberated from the chain. Enzymes in the body make use of this change in Gibbs energy […] to bring about the linking of amino acids, and gradually build a protein molecule. It takes the effort of about three ATP molecules to link two amino acids together, so the construction of a typical protein of about 150 amino acid groups needs the energy released by about 450 ATP molecules. […] The ADP molecules, the husks of dead ATP molecules, are too valuable just to discard. They are converted back into ATP molecules by coupling to reactions that release even more Gibbs energy […] and which reattach a phosphate group to each one. These heavy-weight reactions are the reactions of metabolism of the food that we need to ingest regularly.”
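
The ATP arithmetic in that passage is easy to reproduce; a trivial check of my own (the ‘about three ATP per link’ figure is the one quoted above, the rest is just counting, and the reminder about ΔG < 0 is the standard criterion for spontaneity at constant temperature and pressure):

```python
# A chain of n amino acid residues contains n-1 peptide bonds,
# and (per the quote) each link costs roughly 3 ATP molecules.
amino_acids = 150
peptide_bonds = amino_acids - 1
atp_per_bond = 3
print(peptide_bonds * atp_per_bond)   # ~450 ATP, matching the figure quoted above

# For context: a process at constant temperature and pressure is spontaneous
# when its Gibbs energy change is negative (delta_G < 0).
```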

Links of interest below – the stuff covered in the links is the sort of stuff covered in this book:

Laws of thermodynamics (article includes links to many other articles of interest, including links to each of the laws mentioned above).
System concepts.
Intensive and extensive properties.
Mechanical equilibrium.
Thermal equilibrium.
Diathermal wall.
Thermodynamic temperature.
Thermodynamic beta.
Ludwig Boltzmann.
Boltzmann constant.
Maxwell–Boltzmann distribution.
Conservation of energy.
Work (physics).
Internal energy.
Heat (physics).
Microscopic view of heat.
Reversible process (thermodynamics).
Carnot’s theorem.
Enthalpy.
Fluctuation-dissipation theorem.
Noether’s theorem.
Entropy.
Thermal efficiency.
Rudolf Clausius.
Spontaneous process.
Residual entropy.
Heat engine.
Coefficient of performance.
Helmholtz free energy.
Gibbs free energy.
Phase transition.
Chemical equilibrium.
Superconductivity.
Superfluidity.
Absolute zero.

February 5, 2017 Posted by | books, Physics | Leave a comment

Galaxies

I have added some observations from the book below, as well as some links covering people/ideas/stuff discussed/mentioned in the book.

“On average, out of every 100 newly born star systems, 60 are binaries and 40 are triples. Solitary stars like the Sun are later ejected from triple systems formed in this way.”

“…any object will become a black hole if it is sufficiently compressed. For any mass, there is a critical radius, called the Schwarzschild radius, for which this occurs. For the Sun, the Schwarzschild radius is just under 3 km; for the Earth, it is just under 1 cm. In either case, if the entire mass of the object were squeezed within the appropriate Schwarzschild radius it would become a black hole.”
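
Both numbers are easy to verify from the formula for the Schwarzschild radius, r = 2GM/c²; a quick check of my own:

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s

def schwarzschild_radius(mass_kg):
    """Schwarzschild radius r = 2*G*M/c^2."""
    return 2 * G * mass_kg / c**2

print(schwarzschild_radius(1.989e30))   # Sun:   ~2.95e3 m, just under 3 km
print(schwarzschild_radius(5.972e24))   # Earth: ~8.9e-3 m, just under 1 cm
```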

“It only became possible to study the centre of our Galaxy when radio telescopes and other instruments that do not rely on visible light became available. There is a great deal of dust in the plane of the Milky Way […] This blocks out visible light. But longer wavelengths penetrate the dust more easily. That is why sunsets are red – short wavelength (blue) light is scattered out of the line of sight by dust in the atmosphere, while the longer wavelength red light gets through to your eyes. So our understanding of the galactic centre is largely based on infrared and radio observations.”
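
The sunset point is a wavelength-dependence effect: for scattering off particles much smaller than the wavelength (Rayleigh scattering), the scattered intensity falls off as the fourth power of the wavelength. A rough illustration of my own, not from the book:

```python
# Rayleigh scattering strength scales as 1/wavelength^4, so short (blue) wavelengths
# are scattered out of the line of sight far more strongly than long (red) ones.
blue_nm, red_nm = 450, 650
print((red_nm / blue_nm) ** 4)   # ~4.4: blue light is scattered about four times more strongly
```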

“there is strong evidence that the Milky Way Galaxy is a completely ordinary disc galaxy, a typical representative of its class. Since that is the case, it means that we can confidently use our inside knowledge of the structure and evolution of our own Galaxy, based on close-up observations, to help our understanding of the origin and nature of disc galaxies in general. We do not occupy a special place in the Universe; but this was only finally established at the end of the 20th century. […] in the decades following Hubble’s first measurements of the cosmological distance scale, the Milky Way still seemed like a special place. Hubble’s calculation of the distance scale implied that other galaxies are relatively close to our Galaxy, and so they would not have to be very big to appear as large as they do on the sky; the Milky Way seemed to be by far the largest galaxy in the Universe. We now know that Hubble was wrong. […] the value he initially found for the Hubble Constant was about seven times bigger than the value accepted today. In other words, all the extragalactic distances Hubble inferred were seven times too small. But this was not realized overnight. The cosmological distance scale was only revised slowly, over many decades, as observations improved and one error after another was corrected. […] The importance of determining the cosmological distance scale accurately, more than half a century after Hubble’s pioneering work, was still so great that it was a primary justification for the existence of the Hubble Space Telescope (HST).”
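
The ‘seven times too small’ point follows directly from Hubble’s law, v = H0·d: for a given recession velocity the inferred distance scales as 1/H0. A quick check of my own, using the usual round numbers (Hubble’s original value of H0 was roughly 500 km/s/Mpc, against roughly 70 today):

```python
# Hubble's law: v = H0 * d, so the inferred distance is d = v / H0.
H0_hubble, H0_modern = 500.0, 70.0    # km/s per Mpc, approximate values
print(H0_hubble / H0_modern)          # ~7: Hubble's distances came out about seven times too small
```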

“The key point to grasp […] is that the expansion described by [Einstein’s] equations is an expansion of space as time passes. The cosmological redshift is not a Doppler effect caused by galaxies moving outward through space, as if fleeing from the site of some great explosion, but occurs because the space between the galaxies is stretching. So the spaces between galaxies increase while light is on its way from one galaxy to another. This stretches the light waves to longer wavelengths, which means shifting them towards the red end of the spectrum. […] The second key point about the universal expansion is that it does not have a centre. There is nothing special about the fact that we observe galaxies receding with redshifts proportional to their distances from the Milky Way. […] whichever galaxy you happen to be sitting in, you will see the same thing – redshift proportional to distance.”

“The age of the Universe is determined by studying some of the largest things in the Universe, clusters of galaxies, and analysing their behaviour using the general theory of relativity. Our understanding of how stars work, from which we calculate their ages, comes from studying some of the smallest things in the Universe, the nuclei of atoms, and using the other great theory of 20th-century physics, quantum mechanics, to calculate how nuclei fuse with one another to release the energy that keeps stars shining. The fact that the two ages agree with one another, and that the ages of the oldest stars are just a little bit less than the age of the Universe, is one of the most compelling reasons to think that the whole of 20th-century physics works and provides a good description of the world around us, from the very small scale to the very large scale.”
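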

“Planets are small objects orbiting a large central mass, and the gravity of the Sun dominates their motion. Because of this, the speed with which a planet moves […] is inversely proportional to the square root of its distance from the centre of the Solar System. Jupiter is farther from the Sun than we are, so it moves more slowly in its orbit than the Earth, as well as having a larger orbit. But all the stars in the disc of a galaxy move at the same speed. Stars farther out from the centre still have bigger orbits, so they still take longer to complete one circuit of the galaxy. But they are all travelling at essentially the same orbital speed through space.”
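
To illustrate the contrast with a couple of numbers of my own (the solar-system speeds follow from v = √(GM/r); the galactic figure is the commonly quoted order of magnitude, none of it taken from the book):

```python
import math

G = 6.674e-11          # m^3 kg^-1 s^-2
M_sun = 1.989e30       # kg
AU = 1.496e11          # m

def orbital_speed_km_s(radius_AU):
    """Circular orbital speed around the Sun: v = sqrt(G*M/r)."""
    return math.sqrt(G * M_sun / (radius_AU * AU)) / 1e3

print(orbital_speed_km_s(1.0))   # Earth:   ~29.8 km/s
print(orbital_speed_km_s(5.2))   # Jupiter: ~13.1 km/s -- slower, as Kepler predicts
# In a disc galaxy, by contrast, the measured orbital speeds stay roughly flat
# (~220 km/s for the Milky Way) out to large radii, one of the key observations
# behind the case for dark matter.
```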

“The importance of studying objects at great distances across the Universe is that when we look at an object that is, say, 10 billion light years away, we see it by light which left it 10 billion years ago. This is the ‘look back time’, and it means that telescopes are in a sense time machines, showing us what the Universe was like when it was younger. The light from a distant galaxy is old, in the sense that it has been a long time on its journey; but the galaxy we see using that light is a young galaxy. […] For distant objects, because light has taken a long time on its journey to us, the Universe has expanded significantly while the light was on its way. […] This raises problems defining exactly what you mean by the ‘present distance’ to a remote galaxy”

“Among the many advantages that photographic and electronic recording methods have over the human eye, the most fundamental is that the longer they look, the more they see. Human eyes essentially give us a real-time view of our surroundings, and allow us to see things – such as stars – that are brighter than a certain limit. If an object is too faint to see, once your eyes have adapted to the dark no amount of staring in its direction will make it visible. But the detectors attached to modern telescopes keep on adding up the light from faint sources as long as they are pointing at them. A longer exposure will reveal fainter objects than a short exposure does, as the photons (particles of light) from the source fall on the detector one by one and the total gradually grows.”
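
A toy illustration of my own of why longer exposures reveal fainter objects: photon arrivals are a counting (Poisson) process, so the noise on a count N is roughly √N and the significance of a detection grows roughly as the square root of the exposure time. The source and background rates below are made up.

```python
import math

source_rate, background_rate = 0.5, 5.0   # photons per second (made-up numbers)

for exposure_s in (1, 100, 10_000):
    src = source_rate * exposure_s        # expected photons from the faint source
    bkg = background_rate * exposure_s    # expected photons from the sky background
    # Poisson counting noise on the total is ~sqrt(src + bkg), so the detection
    # significance grows roughly as the square root of the exposure time:
    significance = src / math.sqrt(src + bkg)
    print(exposure_s, round(significance, 1))
# A source hopelessly lost in the noise after 1 s stands out clearly after a long exposure.
```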

“Nobody can be quite sure where the supermassive black holes at the hearts of galaxies today came from, but it seems at least possible that […] merging of black holes left over from the first generation of stars [in the universe] began the process by which supermassive black holes, feeding off the matter surrounding them, formed. […] It seems very unlikely that supermassive black holes formed first and then galaxies grew around them; they must have formed together, in a process sometimes referred to as co-evolution, from the seeds provided by the original black holes of a few hundred solar masses and the raw materials of the dense clouds of baryons in the knots in the filamentary structure. […] About one in a hundred of the galaxies seen at low redshifts are actively involved in the late stages of mergers, but these processes take so little time, compared with the age of the Universe, that the statistics imply that about half of all the galaxies visible nearby are the result of mergers between similarly sized galaxies in the past seven or eight billion years. Disc galaxies like the Milky Way seem themselves to have been built up from smaller sub-units, starting out with the spheroid and adding bits and pieces as time passed. […] there were many more small galaxies when the Universe was young than we see around us today. This is exactly what we would expect if many of the small galaxies have either grown larger through mergers or been swallowed up by larger galaxies.”

Links of interest:

Galaxy (‘featured article’).
Leonard Digges.
Thomas Wright.
William Herschel.
William Parsons.
The Great Debate.
Parallax.
Extinction (astronomy).
Henrietta Swan Leavitt (‘good article’).
Cepheid variable.
Ejnar Hertzsprung. (Before reading this book, I had no idea one of the people behind the famous Hertzsprung–Russell diagram was a Dane. I blame my physics teachers. I was probably told this by one of them, but if the guy in question had been a better teacher, I’d have listened, and I’d have known this.).
Globular cluster (‘featured article’).
Vesto Slipher.
Redshift (‘featured article’).
Refracting telescope/Reflecting telescope.
Disc galaxy.
Edwin Hubble.
Milton Humason.
Doppler effect.
Milky Way.
Orion Arm.
Stellar population.
Sagittarius A*.
Minkowski space.
General relativity (featured).
The Big Bang theory (featured).
Age of the universe.
Malmquist bias.
Type Ia supernova.
Dark energy.
Baryons/leptons.
Cosmic microwave background.
Cold dark matter.
Lambda-CDM model.
Lenticular galaxy.
Active galactic nucleus.
Quasar.
Hubble Ultra-Deep Field.
Stellar evolution.
Velocity dispersion.
Hawking radiation.
Ultimate fate of the universe.

 

February 5, 2017 Posted by | astronomy, books, cosmology, Physics | Leave a comment

Diabetes and the Brain (III)

Some quotes from the book below.

Tests that are used in clinical neuropsychology in most cases examine one or more aspects of cognitive domains, which are theoretical constructs in which a multitude of cognitive processes are involved. […] By definition, a subdivision in cognitive domains is arbitrary, and many different classifications exist. […] for a test to be recommended, several criteria must be met. First, a test must have adequate reliability: the test must yield similar outcomes when applied over multiple test sessions, i.e., have good test–retest reliability. […] Furthermore, the interobserver reliability is important, in that the test must have a standardized assessment procedure and is scored in the same manner by different examiners. Second, the test must have adequate validity. Here, different forms of validity are important. Content validity is established by expert raters with respect to item formulation, item selection, etc. Construct validity refers to the underlying theoretical construct that the test is assumed to measure. To assess construct validity, both convergent and divergent validities are important. Convergent validity refers to the amount of agreement between a given test and other tests that measure the same function. In turn, a test with a good divergent validity correlates minimally with tests that measure other cognitive functions. Moreover, predictive validity (or criterion validity) is related to the degree of correlation between the test score and an external criterion, for example, the correlation between a cognitive test and functional status. […] it should be stressed that cognitive tests alone cannot be used as ultimate proof for organic brain damage, but should be used in combination with more direct measures of cerebral abnormalities, such as neuroimaging.”
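
Test-retest reliability in particular has a very simple operationalization: give the same test twice and correlate the scores. A minimal sketch of my own, with made-up data:

```python
import numpy as np

# Scores from the same (hypothetical) group of subjects on two test sessions.
session_1 = np.array([12, 15, 9, 22, 18, 14, 20, 11])
session_2 = np.array([13, 14, 10, 21, 19, 15, 19, 12])

# Test-retest reliability is commonly summarized as the correlation between the two sessions.
r = np.corrcoef(session_1, session_2)[0, 1]
print(round(r, 3))   # close to 1 -> similar outcomes across sessions, i.e. good reliability
```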

“Intelligence is a theoretically ill-defined construct. In general, it refers to the ability to think in an abstract manner and solve new problems. Typically, two forms of intelligence are distinguished, crystallized intelligence (academic skills and knowledge that one has acquired during schooling) and fluid intelligence (the ability to solve new problems). Crystallized intelligence is better preserved in patients with brain disease than fluid intelligence (3). […] From a neuropsychological viewpoint, the concept of intelligence as a unitary construct (often referred to as g-factor) does not provide valuable information, since deficits in specific cognitive functions may be averaged out in the total IQ score. Thus, in most neuropsychological studies, intelligence tests are included because of specific subtests that are assumed to measure specific cognitive functions, and the performance profile is analyzed rather than considering the IQ measure as a compound score in isolation.”

“Attention is a concept that in general relates to the selection of relevant information from our environment and the suppression of irrelevant information (selective or “focused” attention), the ability to shift attention between tasks (divided attention), and to maintain a state of alertness to incoming stimuli over longer periods of time (concentration and vigilance). Many different structures in the human brain are involved in attentional processing and, consequently, disorders in attention occur frequently after brain disease or damage (21). […] Speed of information processing is not a localized cognitive function, but depends greatly on the integrity of the cerebral network as a whole, the subcortical white matter and the interhemispheric and intrahemispheric connections. It is one of the cognitive functions that clearly declines with age and it is highly susceptible to brain disease or dysfunction of any kind.”

“The Mini-Mental State Examination (MMSE) is a screening instrument that has been developed to determine whether older adults have cognitive impairments […] numerous studies have shown that the MMSE has poor sensitivity and specificity, as well as low test–retest reliability […] the MMSE has been developed to determine cognitive decline that is typical for Alzheimer’s dementia, but has been found less useful in determining cognitive decline in nondemented patients (44) or in patients with other forms of dementia. This is important since odds ratios for both vascular dementia and Alzheimer’s dementia are increased in diabetes (45). Notwithstanding this increased risk, most patients with diabetes have subtle cognitive deficits (46, 47) that may easily go undetected using gross screening instruments such as the MMSE. For research in diabetes a high sensitivity is thus especially important. […] ceiling effects in test performance often result in a lack of sensitivity. Subtle impairments are easily missed, resulting in a high proportion of false-negative cases […] In general, tests should be cognitively demanding to avoid ceiling effects in patients with mild cognitive dysfunction. […] sensitive domains such as speed of information processing, (working) memory, attention, and executive function should be examined thoroughly in diabetes patients, whereas other domains such as language, motor function, and perception are less likely to be affected. Intelligence should always be taken into account, and confounding factors such as mood, emotional distress, and coping are crucial for the interpretation of the neuropsychological test results.”
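
Sensitivity, specificity, and the false-negative problem mentioned above can be illustrated with a toy screening example; the counts below are entirely made up by me:

```python
# Toy screening example: counts of patients by true status and screening result.
true_positive  = 30    # impaired, flagged by the screen
false_negative = 70    # impaired, missed by the screen (the problem described above)
true_negative  = 180   # unimpaired, correctly passed
false_positive = 20    # unimpaired, incorrectly flagged

sensitivity = true_positive / (true_positive + false_negative)   # 0.30
specificity = true_negative / (true_negative + false_positive)   # 0.90
print(sensitivity, specificity)
# A screen like this looks reassuringly specific, yet misses 70% of genuinely impaired
# patients -- exactly the kind of false-negative problem described for ceiling-prone instruments.
```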

“The life-time risk of any dementia has been estimated to be more than 1 in 5 for women and 1 in 6 for men (2). Worldwide, about 24 million people have dementia, with 4.6 million new cases of dementia every year (3). […] Dementia can be caused by various underlying diseases, the most common of which is Alzheimer’s disease (AD) accounting for roughly 70% of cases in the elderly. The second most common cause of dementia is vascular dementia (VaD), accounting for 16% of cases. Other, less common, causes include dementia with Lewy bodies (DLB) and frontotemporal lobar degeneration (FTLD). […] It is estimated that both the incidence and the prevalence [of AD] double with every 5-year increase in age. Other risk factors for AD include female sex and vascular risk factors, such as diabetes, hypercholesterolaemia and hypertension […] In contrast with AD, progression of cognitive deficits [in VaD] is mostly stepwise and with an acute or subacute onset. […] it is clear that cerebrovascular disease is one of the major causes of cognitive decline. Vascular risk factors such as diabetes mellitus and hypertension have been recognized as risk factors for VaD […] Although pure vascular dementia is rare, cerebrovascular pathology is frequently observed on MRI and in pathological studies of patients clinically diagnosed with AD […] Evidence exists that AD and cerebrovascular pathology act synergistically (60).”
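
The ‘doubles with every 5-year increase in age’ claim corresponds to simple exponential growth; a throwaway illustration of my own, with age 65 picked arbitrarily as the reference point:

```python
def relative_prevalence(age, reference_age=65):
    """Relative prevalence if it doubles with every 5 years of age beyond the reference age."""
    return 2 ** ((age - reference_age) / 5)

for age in (65, 70, 80, 90):
    print(age, relative_prevalence(age))   # 1, 2, 8, and 32 times the age-65 level
```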

“In type 1 diabetes the annual prevalence of severe hypoglycemia (requiring help for recovery) is 30–40% while the annual incidence varies depending on the duration of diabetes. In insulin-treated type 2 diabetes, the frequency is lower but increases with duration of insulin therapy. […] In normal health, blood glucose is maintained within a very narrow range […] The functioning of the brain is optimal within this range; cognitive function rapidly becomes impaired when the blood glucose falls below 3.0 mmol/l (54 mg/dl) (3). Similarly, but much less dramatically, cognitive function deteriorates when the brain is exposed to high glucose concentrations” (I did not know the latter for certain, but I certainly have had my suspicions for a long time).
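
As an aside of mine: the two glucose units quoted are related by a single factor (glucose has a molar mass of roughly 180 g/mol, so 1 mmol/l ≈ 18 mg/dl):

```python
def mmol_to_mgdl(glucose_mmol_l):
    """Convert blood glucose from mmol/l to mg/dl (1 mmol/l ~ 18 mg/dl)."""
    return glucose_mmol_l * 18.0

print(mmol_to_mgdl(3.0))   # ~54 mg/dl, the cognitive-impairment threshold quoted above
```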

“When exogenous insulin is injected into a non-diabetic adult human, peripheral tissues such as skeletal muscle and adipose tissue rapidly take up glucose, while hepatic glucose output is suppressed. This causes blood glucose to fall and triggers a series of counterregulatory events to counteract the actions of insulin; this prevents a progressive decline in blood glucose and subsequently reverses the hypoglycemia. In people with insulin-treated diabetes, many of the homeostatic mechanisms that regulate blood glucose are either absent or deficient. [If you’re looking for more details on these topics, it should perhaps be noted here that Philip Cryer’s book on these topics is very nice and informative]. […] The initial endocrine response to a fall in blood glucose in non-diabetic humans is the suppression of endogenous insulin secretion. This is followed by the secretion of the principal counterregulatory hormones, glucagon and epinephrine (adrenaline) (5). Cortisol and growth hormone also contribute, but have greater importance in promoting recovery during exposure to prolonged hypoglycemia […] Activation of the peripheral sympathetic nervous system and the adrenal glands provokes the release of a copious quantity of catecholamines, epinephrine, and norepinephrine […] Glucagon is secreted from the alpha cells of the pancreatic islets, apparently in response to localized neuroglycopenia and independent of central neural control. […] The large amounts of catecholamines that are secreted in response to hypoglycemia exert other powerful physiological effects that are unrelated to counterregulation. These include major hemodynamic actions with direct effects on the heart and blood pressure. […] regional blood flow changes occur during hypoglycemia that encourages the transport of substrates to the liver for gluconeogenesis and simultaneously of glucose to the brain. Organs that have no role in the response to acute stress, such as the spleen and kidneys, are temporarily under-perfused. The mobilisation and activation of white blood cells are accompanied by hemorheological effects, promoting increased viscosity, coagulation, and fibrinolysis and may influence endothelial function (6). In normal health these acute physiological changes probably exert no harmful effects, but may acquire pathological significance in people with diabetes of long duration.”

“The more complex and attention-demanding cognitive tasks, and those that require speeded responses are more affected by hypoglycemia than simple tasks or those that do not require any time restraint (3). The overall speed of response of the brain in making decisions is slowed, yet for many tasks, accuracy is preserved at the expense of speed (8, 9). Many aspects of mental performance become impaired when blood glucose falls below 3.0 mmol/l […] Recovery of cognitive function does not occur immediately after the blood glucose returns to normal, but in some cognitive domains may be delayed for 60 min or more (3), which is of practical importance to the performance of tasks that require complex cognitive functions, such as driving. […] [the] major changes that occur during hypoglycemia – counterregulatory hormone secretion, symptom generation, and cognitive dysfunction – occur as components of a hierarchy of responses, each being triggered as the blood glucose falls to its glycemic threshold. […] In nondiabetic individuals, the glycemic thresholds are fixed and reproducible (10), but in people with diabetes, these thresholds are dynamic and plastic, and can be modified by external factors such as glycemic control or exposure to preceding (antecedent) hypoglycemia (11). Changes in the glycemic thresholds for the responses to hypoglycemia underlie the effects of the acquired hypoglycemia syndromes that can develop in people with insulin-treated diabetes […] the incidence of severe hypoglycemia in people with insulin-treated type 2 diabetes increases steadily with duration of insulin therapy […], as pancreatic beta-cell failure develops. The under-recognized risk of severe hypoglycemia in insulin-treated type 2 diabetes is of great practical importance as this group is numerically much larger than people with type 1 diabetes and encompasses many older, and some very elderly, people who may be exposed to much greater danger because they often have co-morbidities such as macrovascular disease, osteoporosis, and general frailty.”

“Hypoglycemia occurs when a mismatch develops between the plasma concentrations of glucose and insulin, particularly when the latter is inappropriately high, which is common during the night. Hypoglycemia can result when too much insulin is injected relative to oral intake of carbohydrate or when a meal is missed or delayed after insulin has been administered. Strenuous exercise can precipitate hypoglycemia through accelerated absorption of insulin and depletion of muscle glycogen stores. Alcohol enhances the risk of prolonged hypoglycemia by inhibiting hepatic gluconeogenesis, but the hypoglycemia may be delayed for several hours. Errors of dosage or timing of insulin administration are common, and there are few conditions where the efficacy of the treatment can be influenced by so many extraneous factors. The time–action profiles of different insulins can be modified by factors such as the ambient temperature or the site and depth of injection and the person with diabetes has to constantly try to balance insulin requirement with diet and exercise. It is therefore not surprising that hypoglycemia occurs so frequently. […] The lower the median blood glucose during the day, the greater the frequency of symptomatic and biochemical hypoglycemia […] Strict glycemic control can […] induce the acquired hypoglycemia syndromes, impaired awareness of hypoglycemia (a major risk factor for severe hypoglycemia), and counterregulatory hormonal deficiencies (which interfere with blood glucose recovery). […] Severe hypoglycemia is more common at the extremes of age – in very young children and in elderly people. […] In type 1 diabetes the frequency of severe hypoglycemia increases with duration of diabetes (12), while in type 2 diabetes it is associated with increasing duration of insulin treatment (18). […] Around one quarter of all episodes of severe hypoglycemia result in coma […] In 10% of episodes of severe hypoglycemia affecting people with type 1 diabetes and around 30% of those in people with insulin-treated type 2 diabetes, the assistance of the emergency medical services is required (23). However, most episodes (both mild and severe) are treated in the community, and few people require admission to hospital.”

“Severe hypoglycemia is potentially dangerous and has a significant mortality and morbidity, particularly in older people with insulin-treated diabetes who often have premature macrovascular disease. The hemodynamic effects of autonomic stimulation may provoke acute vascular events such as myocardial ischemia and infarction, cardiac failure, cerebral ischemia, and stroke (6). In clinical practice the cardiovascular and cerebrovascular consequences of hypoglycemia are frequently overlooked because the role of hypoglycemia in precipitating the vascular event is missed. […] The profuse secretion of catecholamines in response to hypoglycemia provokes a fall in plasma potassium and causes electrocardiographic (ECG) changes, which in some individuals may provoke a cardiac arrhythmia […]. A possible mechanism that has been observed with ECG recordings during hypoglycemia is prolongation of the QT interval […]. Hypoglycemia-induced arrhythmias during sleep have been implicated as the cause of the “dead in bed” syndrome that is recognized in young people with type 1 diabetes (40). […] Total cerebral blood flow is increased during acute hypoglycemia while regional blood flow within the brain is altered acutely. Blood flow increases in the frontal cortex, presumably as a protective compensatory mechanism to enhance the supply of available glucose to the most vulnerable part of the brain. These regional vascular changes become permanent in people who are exposed to recurrent severe hypoglycemia and in those with impaired awareness of hypoglycemia, and are then present during normoglycemia (41). This probably represents an adaptive response of the brain to recurrent exposure to neuroglycopenia. However, these permanent hypoglycemia-induced changes in regional cerebral blood flow may encourage localized neuronal ischemia, particularly if the cerebral circulation is already compromised by the development of cerebrovascular disease associated with diabetes. […] Hypoglycemia-induced EEG changes can persist for days or become permanent, particularly after recurrent severe hypoglycemia”.

“In the large British Diabetic Association Cohort Study of people who had developed type 1 diabetes before the age of 30, acute metabolic complications of diabetes were the greatest single cause of excess death under the age of 30; hypoglycemia was the cause of death in 18% of males and 6% of females in the 20–49 age group (47).”

“[The] syndromes of counterregulatory hormonal deficiencies and impaired awareness of hypoglycemia (IAH) develop over a period of years and ultimately affect a substantial proportion of people with type 1 diabetes and a lesser number with insulin-treated type 2 diabetes. They are considered to be components of hypoglycemia-associated autonomic failure (HAAF), through down-regulation of the central mechanisms within the brain that would normally activate glucoregulatory responses to hypoglycemia, including the release of counterregulatory hormones and the generation of warning symptoms (48). […] The glucagon secretory response to hypoglycemia becomes diminished or absent within a few years of the onset of insulin-deficient diabetes. With glucagon deficiency alone, blood glucose recovery from hypoglycemia is not noticeably affected because the secretion of epinephrine maintains counterregulation. However, almost half of those who have type 1 diabetes of 20 years duration have evidence of impairment of both glucagon and epinephrine in response to hypoglycemia (49); this seriously delays blood glucose recovery and allows progression to more severe and prolonged hypoglycemia when exposed to low blood glucose. People with type 1 diabetes who have these combined counterregulatory hormonal deficiencies have a 25-fold higher risk of experiencing severe hypoglycemia if they are subjected to intensive insulin therapy compared with those who have lost their glucagon response but have retained epinephrine secretion […] Impaired awareness is not an “all or none” phenomenon. “Partial” impairment of awareness may develop, with the individual being aware of some episodes of hypoglycemia but not others (53). Alternatively, the intensity or number of symptoms may be reduced, and neuroglycopenic symptoms predominate. […] total absence of any symptoms, albeit subtle, is very uncommon […] IAH affects 20–25% of patients with type 1 diabetes (11, 55) and less than 10% with type 2 diabetes (24), becomes more prevalent with increasing duration of diabetes (12) […], and predisposes the patient to a sixfold higher risk of severe hypoglycemia than people who retain normal awareness (56). When IAH is associated with strict glycemic control during intensive insulin therapy or has followed episodes of recurrent severe hypoglycemia, it may be reversible by relaxing glycemic control or by avoiding further hypoglycemia (11), but in many patients with type 1 diabetes of long duration, it appears to be a permanent defect. […] The modern management of diabetes strives to achieve strict glycemic control using intensive therapy to avoid or minimize the long-term complications of diabetes; this strategy tends to increase the risk of hypoglycemia and promotes development of the acquired hypoglycemia syndromes.”

February 5, 2017 Posted by | books, diabetes, medicine, Neurology | Leave a comment