Econstudentlog

The Ageing Immune System and Health (II)

Here’s the first post about the book. I finished it a while ago, but I recently realized that I had not completed my intended coverage of it here on the blog. As some of the book’s material sort-of-kind-of relates to material encountered in a book I’m currently reading (Biodemography of Aging), I decided I might as well finish my coverage now, in order to review some things I might have forgotten in the meantime, by covering here some of the material from the second half of the book. It’s a nice book with some interesting observations, but as I also pointed out in my first post, it is definitely not an easy read. Below I have included some observations from the book’s second half.

Lungs:

“The aged lung is characterised by airspace enlargement similar to, but not identical with acquired emphysema [4]. Such tissue damage is detected even in non-smokers above 50 years of age as the septa of the lung alveoli are destroyed and the enlarged alveolar structures result in a decreased surface for gas exchange […] Additional problems are that surfactant production decreases with age [6] increasing the effort needed to expand the lungs during inhalation in the already reduced thoracic cavity volume where the weakened muscles are unable to thoroughly ventilate. […] As ageing is associated with respiratory muscle strength reduction, coughing becomes difficult making it progressively challenging to eliminate inhaled particles, pollens, microbes, etc. Additionally, ciliary beat frequency (CBF) slows down with age impairing the lungs’ first line of defence: mucociliary clearance [9] as the cilia can no longer repel invading microorganisms and particles. Consequently e.g. bacteria can more easily colonise the airways leading to infections that are frequent in the pulmonary tract of the older adult.”

“With age there are dramatic changes in neutrophil function, including reduced chemotaxis, phagocytosis and bactericidal mechanisms […] reduced bactericidal function will predispose to infection but the reduced chemotaxis also has consequences for lung tissue as this results in increased tissue bystander damage from neutrophil elastases released during migration […] It is currently accepted that alterations in pulmonary PPAR profile, more precisely loss of PPARγ activity, can lead to inflammation, allergy, asthma, COPD, emphysema, fibrosis, and cancer […]. Since it has been reported that PPARγ activity decreases with age, this provides a possible explanation for the increasing incidence of these lung diseases and conditions in older individuals [6].”

Cancer:

“Age is an important risk factor for cancer and subjects aged over 60 also have a higher risk of comorbidities. Approximately 50 % of neoplasms occur in patients older than 70 years […] a major concern for poor prognosis is with cancer patients over 70–75 years. These patients have a lower functional reserve, a higher risk of toxicity after chemotherapy, and an increased risk of infection and renal complications that lead to a poor quality of life. […] [Whereas] there is a difference in organs with higher cancer incidence in developed versus developing countries [,] incidence increases with ageing almost irrespective of country […] The findings from Surveillance, Epidemiology and End Results Program [SEER; incidentally, I likely shall at some point discuss this one in much more detail, as the aforementioned biodemography textbook covers this data in a lot of detail. – US] [6] show that almost a third of all cancers are diagnosed after the age of 75 years and 70 % of cancer-related deaths occur after the age of 65 years. […] The traditional clinical trial focus is on younger and healthier patients, i.e. with few or no co-morbidities. These restrictions have resulted in a lack of data about the optimal treatment for older patients [7] and a poor evidence base for therapeutic decisions. […] In the older patient, neutropenia, anemia, mucositis, cardiomyopathy and neuropathy — the toxic effects of chemotherapy — are more pronounced […] The correction of comorbidities and malnutrition can lead to greater safety in the prescription of chemotherapy […] Immunosenescence is a general classification for changes occurring in the immune system during the ageing process, as the distribution and function of cells involved in innate and adaptive immunity are impaired or remodelled […] Immunosenescence is considered a major contributor to cancer development in aged individuals”.

Neurodegenerative diseases:

“Dementia and age-related vision loss are major causes of disability in our ageing population and it is estimated that a third of people aged over 75 are affected. […] age is the largest risk factor for the development of neurodegenerative diseases […] older patients with comorbidities such as atherosclerosis, type II diabetes or those suffering from repeated or chronic systemic bacterial and viral infections show earlier onset and progression of clinical symptoms […] analysis of post-mortem brain tissue from healthy older individuals has provided evidence that the presence of misfolded proteins alone does not correlate with cognitive decline and dementia, implying that additional factors are critical for neural dysfunction. We now know that innate immune genes and life-style contribute to the onset and progression of age-related neuronal dysfunction, suggesting that chronic activation of the immune system plays a key role in the underlying mechanisms that lead to irreversible tissue damage in the CNS. […] Collectively these studies provide evidence for a critical role of inflammation in the pathogenesis of a range of neurodegenerative diseases, but the factors that drive or initiate inflammation remain largely elusive.”

“The effect of infection, mimicked experimentally by administration of bacterial lipopolysaccharide (LPS) has revealed that immune to brain communication is a critical component of a host organism’s response to infection and a collection of behavioural and metabolic adaptations are initiated over the course of the infection with the purpose of restricting the spread of a pathogen, optimising conditions for a successful immune response and preventing the spread of infection to other organisms [10]. These behaviours are mediated by an innate immune response and have been termed ‘sickness behaviours’ and include depression, reduced appetite, anhedonia, social withdrawal, reduced locomotor activity, hyperalgesia, reduced motivation, cognitive impairment and reduced memory encoding and recall […]. Metabolic adaptations to infection include fever, altered dietary intake and reduction in the bioavailability of nutrients that may facilitate the growth of a pathogen such as iron and zinc [10]. These behavioural and metabolic adaptations are evolutionarily highly conserved and also occur in humans”.

“Sickness behaviour and transient microglial activation are beneficial for individuals with a normal, healthy CNS, but in the ageing or diseased brain the response to peripheral infection can be detrimental and increases the rate of cognitive decline. Aged rodents exhibit exaggerated sickness and prolonged neuroinflammation in response to systemic infection […] Older people who contract a bacterial or viral infection or experience trauma postoperatively, also show exaggerated neuroinflammatory responses and are prone to develop delirium, a condition which results in a severe short term cognitive decline and a long term decline in brain function […] Collectively these studies demonstrate that peripheral inflammation can increase the accumulation of two neuropathological hallmarks of AD, further strengthening the hypothesis that inflammation i[s] involved in the underlying pathology. […] Studies from our own laboratory have shown that AD patients with mild cognitive impairment show a fivefold increased rate of cognitive decline when contracting a systemic urinary tract or respiratory tract infection […] Apart from bacterial infection, chronic viral infections have also been linked to increased incidence of neurodegeneration, including cytomegalovirus (CMV). This virus is ubiquitously distributed in the human population, and along with other age-related diseases such as cardiovascular disease and cancer, has been associated with increased risk of developing vascular dementia and AD [66, 67].”

Frailty:

“Frailty is associated with changes to the immune system, importantly the presence of a pro-inflammatory environment and changes to both the innate and adaptive immune system. Some of these changes have been demonstrated to be present before the clinical features of frailty are apparent suggesting the presence of potentially modifiable mechanistic pathways. To date, exercise programme interventions have shown promise in the reversal of frailty and related physical characteristics, but there is no current evidence for successful pharmacological intervention in frailty. […] In practice, acute illness in a frail person results in a disproportionate change in a frail person’s functional ability when faced with a relatively minor physiological stressor, associated with a prolonged recovery time […] Specialist hospital services such as surgery [15], hip fractures [16] and oncology [17] have now begun to recognise frailty as an important predictor of mortality and morbidity.”

I should probably mention here that this is another area where there’s an overlap between this book and the biodemography text I’m currently reading; chapter 7 of the latter text is about ‘Indices of Cumulative Deficits’ and covers this kind of stuff in a lot more detail than does this one, including e.g. detailed coverage of relevant statistical properties of one such index. Anyway, back to the coverage:

“Population based studies have demonstrated that the incidence of infection and subsequent mortality is higher in populations of frail people. […] The prevalence of pneumonia in a nursing home population is 30 times higher than the general population [39, 40]. […] The limited data available demonstrates that frailty is associated with a state of chronic inflammation. There is also evidence that inflammageing predates a diagnosis of frailty suggesting a causative role. […] A small number of studies have demonstrated a dysregulation of the innate immune system in frailty. Frail adults have raised white cell and neutrophil count. […] High white cell count can predict frailty at a ten year follow up [70]. […] A recent meta-analysis and four individual systematic reviews have found beneficial evidence of exercise programmes on selected physical and functional ability […] exercise interventions may have no positive effect in operationally defined frail individuals. […] To date there is no clear evidence that pharmacological interventions improve or ameliorate frailty.”

Exercise:

“[A]s we get older the time and intensity at which we exercise is severely reduced. Physical inactivity now accounts for a considerable proportion of age-related disease and mortality. […] Regular exercise has been shown to improve neutrophil microbicidal functions which reduce the risk of infectious disease. Exercise participation is also associated with increased immune cell telomere length, and may be related to improved vaccine responses. The anti-inflammatory effect of regular exercise and negative energy balance is evident by reduced inflammatory immune cell signatures and lower inflammatory cytokine concentrations. […] Reduced physical activity is associated with a positive energy balance leading to increased adiposity and subsequently systemic inflammation [5]. […] Elevated neutrophil counts accompany increased inflammation with age and the increased ratio of neutrophils to lymphocytes is associated with many age-related diseases including cancer [7]. Compared to more active individuals, less active and overweight individuals have higher circulating neutrophil counts [8]. […] little is known about the intensity, duration and type of exercise which can provide benefits to neutrophil function. […] it remains unclear whether exercise and physical activity can override the effects of NK cell dysfunction in the old. […] A considerable number of studies have assessed the effects of acute and chronic exercise on measures of T-cell immunosenescence including T cell subsets, phenotype, proliferation, cytokine production, chemotaxis, and co-stimulatory capacity. […] Taken together exercise appears to promote an anti-inflammatory response which is mediated by altered adipocyte function and improved energy metabolism leading to suppression of pro-inflammatory cytokine production in immune cells.”

February 24, 2017 Posted by | biology, books, medicine | Leave a comment

Economic Analysis in Healthcare (I)

“This book is written to provide […] a useful balance of theoretical treatment, description of empirical analyses and breadth of content for use in undergraduate modules in health economics for economics students, and for students taking a health economics module as part of their postgraduate training. Although we are writing from a UK perspective, we have attempted to make the book as relevant internationally as possible by drawing on examples, case studies and boxed highlights, not just from the UK, but from a wide range of countries”

I’m currently reading this book. The coverage has been somewhat disappointing, mostly because it’s an undergraduate text which has so far mainly covered concepts and ideas I’m already familiar with – but it’s not terrible, just okay-ish. I have added some observations from the first half of the book below.

“Health economics is the application of economic theory, models and empirical techniques to the analysis of decision making by people, health care providers and governments with respect to health and health care. […] Health economics has evolved into a highly specialised field, drawing on related disciplines including epidemiology, statistics, psychology, sociology, operations research and mathematics […] health economics is not shorthand for health care economics. […] Health economics studies not only the provision of health care, but also how this impacts on patients’ health. Other means by which health can be improved are also of interest, as are the determinants of ill-health. Health economics studies not only how health care affects population health, but also the effects of education, housing, unemployment and lifestyles.”

“Economic analyses have been used to explain the rise in obesity. […] The studies show that reasons for the rise in obesity include: *Technological innovation in food production and transportation that has reduced the cost of food preparation […] *Agricultural innovation and falling food prices that has led to an expansion in food supply […] *A decline in physical activity, both at home and at work […] *An increase in the number of fast-food outlets, resulting in changes to the relative prices of meals […]. *A reduction in the prevalence of smoking, which leads to increases in weight (Chou et al., 2004).”

“[T]he evidence is that ageing is in reality a relatively small factor in rising health care costs. The popular view is known as the ‘expansion of morbidity’ hypothesis. Gruenberg (1977) suggested that the decline in mortality that has led to an increase in the number of older people is because fewer people die from illnesses that they have, rather than because disease incidence and prevalence are lower. Lower mortality is therefore accompanied by greater morbidity and disability. However, Fries (1980) suggested an alternative hypothesis, ‘compression of morbidity’. Lower mortality rates are due to better health amongst the population, so people not only live longer, they are in better health when old. […] Zweifel et al. (1999) examined the hypothesis that the main determinant of high health care costs amongst older people is not the time since they were born, but the time until they die. Their results, confirmed by many subsequent studies, are that proximity to death does indeed explain higher health care costs better than age per se. Seshamani and Gray (2004) estimated that in the UK this is a factor up to 15 years before death, and annual costs increase tenfold during the last 5 years of life. The consensus is that ageing per se contributes little to the continuing rise in health expenditures that all countries face. Much more important drivers are improved quality of care, access to care, and more expensive new technology.”
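
The logic of this result is easy to see in a toy calculation – my own sketch, not from the book, with all numbers invented: if annual costs mainly track proximity to death, longer lives barely raise lifetime costs, whereas an age-driven cost model makes population ageing look ruinous.

```python
# Two toy lifetime-cost models (invented numbers): one where annual
# cost rises with age itself, one where cost is flat except for a
# tenfold jump in the last five years of life (the pattern reported
# by Seshamani and Gray).

def lifetime_cost_age_based(lifespan, base=1.0, growth=0.08):
    # annual cost grows with age: base * (1 + growth)^age
    return sum(base * (1 + growth) ** age for age in range(lifespan))

def lifetime_cost_ttd_based(lifespan, base=1.0, final_years=5, mult=10.0):
    # annual cost is flat until the final years before death
    return (lifespan - final_years) * base + final_years * base * mult

for lifespan in (75, 85):
    print(lifespan,
          round(lifetime_cost_age_based(lifespan)),
          round(lifetime_cost_ttd_based(lifespan)))
# Raising lifespan from 75 to 85 more than doubles lifetime costs in
# the age-based model but adds only ~8% in the time-to-death model.
```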

“The difference between AC [average cost] and MC [marginal cost] is very important in applied health economics. Very often data are available on the average cost of health care services but not on their marginal cost. However, using average costs as if they were marginal costs may mislead. For example, hospital costs will be reduced by schemes that allow some patients to be treated in the community rather than being admitted. Given data on total costs of inpatient stays, it is possible to calculate an average cost per patient. It is tempting to conclude that avoiding an admission will reduce costs by that amount. However, the average includes patients with different levels of illness severity, and the more severe the illness the more costly they will be to treat. Less severely ill patients are most likely to be suitable for treatment in the community, so MC will be lower than AC. Such schemes will therefore produce a lower cost reduction than the estimate of AC suggests.
A problem with multi-product cost functions is that it is not possible to define meaningfully what the AC of a particular product is. If different products share some inputs, the costs of those inputs cannot be solely attributed to any one of them. […] In practice, when multi-product organisations such as hospitals calculate costs for particular products, they use accounting rules to share out the costs of all inputs and calculate average not marginal costs.”
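
A minimal worked example of the AC/MC point – my own numbers, purely illustrative:

```python
# Two severity groups share a ward; only the mild cases are
# candidates for being treated in the community instead.
mild   = {"n": 600, "cost": 1_000}
severe = {"n": 400, "cost": 6_000}

total = mild["n"] * mild["cost"] + severe["n"] * severe["cost"]
ac = total / (mild["n"] + severe["n"])  # average cost per admission
mc = mild["cost"]                       # cost avoided per diverted mild case

print(f"AC = {ac:.0f}, MC = {mc:.0f}")  # AC = 3000, MC = 1000
# Using AC predicts a 3,000 saving per avoided admission; the true
# saving is the 1,000 marginal cost of a mild case.
```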

“Studies of economies of scale in the health sector do not give a consistent and generalisable picture. […] studies of scope economies [also] do not show any consistent and generalisable picture. […] The impact of hospital ownership type on a range of key outcomes is generally ambiguous, with different studies yielding conflicting results. […] The association between hospital ownership and patient outcomes is unclear. The evidence is mixed and inconclusive regarding the impact of hospital ownership on access to care, morbidity, mortality, and adverse events.”

“Public goods are goods that are consumed jointly by all consumers. The strict economics definition of a public good is that they have two characteristics. The first is non-rivalry. This means that the consumption of a good or service by one person does not prevent anyone else from consuming it. Non-rival goods therefore have large marginal external benefits, which make them socially very desirable but privately unprofitable to provide. Examples of nonrival goods are street lighting and pavements. The second is non-excludability. This means that it is not possible to provide a good or service to one person without letting others also consume it. […] This may lead to a free-rider problem, in which people are unwilling to pay for goods and services that are of value to them. […] Note the distinction between public goods, which are goods and services that are non-rival and non-excludable, and publicly provided goods, which are goods or services that are provided by the government for any reason. […] Most health care products and services are not public goods because they are both rival and excludable. […] However, some health care, particularly public health programmes, does have public good properties.”

“[H]ealth care is typically consumed under conditions of uncertainty with respect to the timing of health care expenditure […] and the amount of expenditure on health care that is required […] The usual solution to such problems is insurance. […] Adverse selection exists when exactly the wrong people, from the point of view of the insurance provider, choose to buy insurance: those with high risks. […] Those who are most likely to buy health insurance are those who have a relatively high probability of becoming ill and maybe also incur greater costs than the average when they are ill. […] Adverse selection arises because of the asymmetry of information between insured and insurer. […] Two approaches are adopted to prevent adverse selection. The first is experience rating, where the insurance provider sets a different insurance premium for different risk groups. Those who apply for health insurance might be asked to undergo a medical examination and to disclose any relevant facts concerning their risk status. […] There are two problems with this approach. First, the cost of acquiring the appropriate information may be high. […] Secondly, it might encourage insurance providers to ‘cherry pick’ people, only choosing to provide insurance to the low risk. This may mean that high-risk people are unable to obtain health insurance at all. […] The second approach is to make health insurance compulsory. […] The problem with this is that low-risk people effectively subsidise the health insurance payments of those with higher risks, which may be regarded […] as inequitable.”
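
The premium arithmetic behind adverse selection is worth making explicit; here’s a textbook-style toy example of my own (all numbers, including the willingness-to-pay figures, are invented):

```python
# Two risk types face a possible 10,000 loss; the insurer cannot
# tell them apart, so it charges one pooled, actuarially fair premium.
loss = 10_000
low  = {"p": 0.1, "n": 900, "max_wtp": 1_200}  # assumed willingness to pay
high = {"p": 0.5, "n": 100, "max_wtp": 6_000}

pooled = (low["p"] * low["n"] + high["p"] * high["n"]) \
         / (low["n"] + high["n"]) * loss
print(f"pooled premium: {pooled:.0f}")         # 1400

if pooled > low["max_wtp"]:                    # low risks drop out...
    print(f"high-risk-only premium: {high['p'] * loss:.0f}")  # 5000
# ...and the pool unravels: exactly the wrong people, from the
# insurer's point of view, end up holding insurance.
```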

“Health insurance changes the economic incentives facing both the consumers and the providers of health care. One manifestation of these changes is the existence of moral hazard. This is a phenomenon common to all forms of insurance. The suggestion is that when people are insured against risks and their consequences, they are less careful about minimising them. […] Moral hazard arises when it is possible to alter the probability of the insured event, […] or the size of the insured loss […] The extent of the problem depends on the price elasticity of demand […] Three main mechanisms can be used to reduce moral hazard. The first is co-insurance. Many insurance policies require that when an event occurs the insured shares the insured loss […] with the insurer. The co-insurance rate is the percentage of the insured loss that is paid by the insured. The co-payment is the amount that they pay. […] The second is deductibles. A deductible is an amount of money the insured pays when a claim is made irrespective of co-insurance. The insurer will not pay the insured loss unless the deductible is paid by the insured. […] The third is no-claims bonuses. These are payments made by insurers to discourage claims. They usually take the form of reduced insurance premiums in the next period. […] No-claims bonuses typically discourage insurance claims where the payout by the insurer is small.”
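
To fix ideas about how co-insurance and deductibles combine, a small worked example with hypothetical policy terms:

```python
claim, deductible, coins_rate = 2_000, 250, 0.20  # assumed policy terms

covered       = max(claim - deductible, 0)  # loss above the deductible
copayment     = coins_rate * covered        # the insured's co-insurance share
insurer_pays  = covered - copayment
out_of_pocket = deductible + copayment

print(insurer_pays, out_of_pocket)  # 1400.0 600.0
# Cost sharing leaves the insured exposed to part of every claim,
# which is what blunts the moral hazard incentive.
```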

“The method of reimbursement relates to the way in which health care providers are paid for the services they provide. It is useful to distinguish between reimbursement methods, because they can affect the quantity and quality of health care. […] Retrospective reimbursement at full cost means that hospitals receive payment in full for all health care expenditures incurred in some pre-specified period of time. Reimbursement is retrospective in the sense that not only are hospitals paid after they have provided treatment, but also in that the size of the payment is determined after treatment is provided. […] Which model is used depends on whether hospitals are reimbursed for actual costs incurred, or on a fee-for-service (FFS) basis. […] Since hospital income [in these models] depends on the actual costs incurred (actual costs model) or on the volume of services provided (FFS model) there are few incentives to minimise costs. […] Prospective reimbursement implies that payments are agreed in advance and are not directly related to the actual costs incurred. […] incentives to reduce costs are greater, but payers may need to monitor the quality of care provided and access to services. If the hospital receives the same income regardless of quality, there is a financial incentive to provide low-quality care […] The problem from the point of view of the third-party payer is how best to monitor the activities of health care providers, and how to encourage them to act in a mutually beneficial way. This problem might be reduced if health care providers and third-party payers are linked in some way so that they share common goals. […] Integration between third-party payers and health care providers is a key feature of managed care.”

One of the prospective reimbursement models applied today may be of particular interest to Danes, as the DRG system is a big part of the financial model of the Danish health care system – so I’ve added a few details about this type of system below:

“An example of prospectively set costs per case is the diagnostic-related groups (DRG) pricing scheme introduced into the Medicare system in the USA in 1984, and subsequently used in a number of other countries […] Under this scheme, DRG payments are based on average costs per case in each diagnostic group derived from a sample of hospitals. […] Predicted effects of the DRG pricing scheme are cost shifting, patient shifting and DRG creep. Cost shifting and patient shifting are ways of circumventing the cost-minimising effects of DRG pricing by shifting patients or some of the services provided to patients out of the DRG pricing scheme and into other parts of the system not covered by DRG pricing. For example, instead of being provided on an inpatient basis, treatment might be provided on an outpatient basis where it is reimbursed retrospectively. DRG creep arises when hospitals classify cases into DRGs that carry a higher payment, indicating that they are more complicated than they really are. This might arise, for instance, when cases have multiple diagnoses.”
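
At its core, a DRG-style tariff is just a prospectively fixed group average. A minimal sketch (the sample data are invented):

```python
from collections import defaultdict
from statistics import mean

sample = [                      # (DRG, observed cost) from a hospital sample
    ("hip_replacement", 8_200), ("hip_replacement", 9_400),
    ("appendectomy", 3_000), ("appendectomy", 3_400),
]

costs_by_drg = defaultdict(list)
for drg, cost in sample:
    costs_by_drg[drg].append(cost)

tariff = {drg: mean(costs) for drg, costs in costs_by_drg.items()}
print(tariff)
# Every admission in a group is reimbursed at its flat tariff, fixed
# in advance, so the hospital keeps (or absorbs) any cost difference.
```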

February 20, 2017 Posted by | books, economics, health care | Leave a comment

Rocks: A very short introduction

I liked the book. Below I have added some sample observations from the book, as well as a collection of links to various topics covered/mentioned in the book.

“To make a variety of rocks, there needs to be a variety of minerals. The Earth has shown a capacity for making an increasing variety of minerals throughout its existence. Life has helped in this [but] [e]ven a dead planet […] can evolve a fine array of minerals and rocks. This is done simply by stretching out the composition of the original homogeneous magma. […] Such stretching of composition would have happened as the magma ocean of the earliest […] Earth cooled and began to solidify at the surface, forming the first crust of this new planet — and the starting point, one might say, of our planet’s rock cycle. When magma cools sufficiently to start to solidify, the first crystals that form do not have the same composition as the overall magma. In a magma of ‘primordial Earth’ type, the first common mineral to form was probably olivine, an iron-and-magnesium-rich silicate. This is a dense mineral, and so it tends to sink. As a consequence the remaining magma becomes richer in elements such as calcium and aluminium. From this, at temperatures of around 1,000°C, the mineral plagioclase feldspar would then crystallize, in a calcium-rich variety termed anorthite. This mineral, being significantly less dense than olivine, would tend to rise to the top of the cooling magma. On the Moon, itself cooling and solidifying after its fiery birth, layers of anorthite crystals several kilometres thick built up as the rock — anorthosite — of that body’s primordial crust. This anorthosite now forms the Moon’s ancient highlands, subsequently pulverized by countless meteorite impacts. This rock type can be found on Earth, too, particularly within ancient terrains. […] Was the Earth’s first surface rock also anorthosite? Probably—but we do not know for sure, as the Earth, a thoroughly active planet throughout its existence, has consumed and obliterated nearly all of the crust that formed in the first several hundred million years of its existence, in a mysterious interval of time that we now call the Hadean Eon. […] The earliest rocks that we know of date from the succeeding Archean Eon.”

“Where plates are pulled apart, then pressure is released at depth, above the ever-opening tectonic rift, for instance beneath the mid-ocean ridge that runs down the centre of the Atlantic Ocean. The pressure release from this crustal stretching triggers decompression melting in the rocks at depth. These deep rocks — peridotite — are dense, being rich in the iron- and magnesium-bearing mineral olivine. Heated to the point at which melting just begins, so that the melt fraction makes up only a few percentage points of the total, those melt droplets are enriched in silica and aluminium relative to the original peridotite. The melt will have a composition such that, when it cools and crystallizes, it will largely be made up of crystals of plagioclase feldspar together with pyroxene. Add a little more silica and quartz begins to appear. With less silica, olivine crystallizes instead of quartz.

The resulting rock is basalt. If there was anything like a universal rock of rocky planet surfaces, it is basalt. On Earth it makes up almost all of the ocean floor bedrock — in other words, the ocean crust, that is, the surface layer, some 10 km thick. Below, there is a boundary called the Mohorovičič Discontinuity (or ‘Moho’ for short)[…]. The Moho separates the crust from the dense peridotitic mantle rock that makes up the bulk of the lithosphere. […] Basalt makes up most of the surface of Venus, Mercury, and Mars […]. On the Moon, the ‘mare’ (‘seas’) are not of water but of basalt. Basalt, or something like it, will certainly be present in large amounts on the surfaces of rocky exoplanets, once we are able to bring them into close enough focus to work out their geology. […] At any one time, ocean floor basalts are the most common rock type on our planet’s surface. But any individual piece of ocean floor is, geologically, only temporary. It is the fate of almost all ocean crust — islands, plateaux, and all — to be destroyed within ocean trenches, sliding down into the Earth along subduction zones, to be recycled within the mantle. From that destruction […] there arise the rocks that make up the most durable component of the Earth’s surface: the continents.”

“Basaltic magmas are a common starting point for many other kinds of igneous rocks, through the mechanism of fractional crystallization […]. Remove the early-formed crystals from the melt, and the remaining melt will evolve chemically, usually in the direction of increasing proportions of silica and aluminium, and decreasing amounts of iron and magnesium. These magmas will therefore produce intermediate rocks such as andesites and diorites in the finely and coarsely crystalline varieties, respectively; and then more evolved silica-rich rocks such as rhyolites (fine), microgranites (medium), and granites (coarse). […] Granites themselves can evolve a little further, especially at the late stages of crystallization of large bodies of granite magma. The final magmas are often water-rich ones that contain many of the incompatible elements (such as thorium, uranium, and lithium), so called because they are difficult to fit within the molecular frameworks of the common igneous minerals. From these final ‘sweated-out’ magmas there can crystallize a coarsely crystalline rock known as pegmatite — famous because it contains a wide variety of minerals (of the ~4,500 minerals officially recognized on Earth […] some 500 have been recognized in pegmatites).”

“The less oxygen there is [at the area of deposition], the more the organic matter is preserved into the rock record, and it is where the seawater itself, by the sea floor, has little or no oxygen that some of the great carbon stores form. As animals cannot live in these conditions, organic-rich mud can accumulate quietly and undisturbed, layer by layer, here and there entombing the skeleton of some larger planktonic organism that has fallen in from the sunlit, oxygenated waters high above. It is these kinds of sediments that […] generate[d] the oil and gas that currently power our civilization. […] If sedimentary layers have not been buried too deeply, they can remain as soft muds or loose sands for millions of years — sometimes even for hundreds of millions of years. However, most buried sedimentary layers, sooner or later, harden and turn into rock, under the combined effects of increasing heat and pressure (as they become buried ever deeper under subsequent layers of sediment) and of changes in chemical environment. […] As rocks become buried ever deeper, they become progressively changed. At some stage, they begin to change their character and depart from the condition of sedimentary strata. At this point, usually beginning several kilometres below the surface, buried igneous rocks begin to transform too. The process of metamorphism has started, and may progress until those original strata become quite unrecognizable.”

“Frozen water is a mineral, and this mineral can make up a rock, both on Earth and, very commonly, on distant planets, moons, and comets […]. On Earth today, there are large deposits of ice strata on the cold polar regions of Antarctica and Greenland, with smaller amounts in mountain glaciers […]. These ice strata, the compressed remains of annual snowfalls, have simply piled up, one above the other, over time; on Antarctica, they reach almost 5 km in thickness and at their base are about a million years old. […] The ice cannot pile up for ever, however: as the pressure builds up it begins to behave plastically and to slowly flow downslope, eventually melting or, on reaching the sea, breaking off as icebergs. As the ice mass moves, it scrapes away at the underlying rock and soil, shearing these together to form a mixed deposit of mud, sand, pebbles, and characteristic striated (ice-scratched) cobbles and boulders […] termed a glacial till. Glacial tills, if found in the ancient rock record (where, hardened, they are referred to as tillites), are a sure clue to the former presence of ice.”

“At first approximation, the mantle is made of solid rock and is not […] a seething mass of magma that the fragile crust threatens to founder into. This solidity is maintained despite temperatures that, towards the base of the mantle, are of the order of 3,000°C — temperatures that would very easily melt rock at the surface. It is the immense pressures deep in the Earth, increasing more or less in step with temperature, that keep the mantle rock in solid form. In more detail, the solid rock of the mantle may include greater or lesser (but usually lesser) amounts of melted material, which locally can gather to produce magma chambers […] Nevertheless, the mantle rock is not solid in the sense that we might imagine at the surface: it is mobile, and much of it is slowly moving plastically, taking long journeys that, over many millions of years, may encompass the entire thickness of the mantle (the kinds of speeds estimated are comparable to those at which tectonic plates move, of a few centimetres a year). These are the movements that drive plate tectonics and that, in turn, are driven by the variation in temperature (and therefore density) from the contact region with the hot core, to the cooler regions of the upper mantle.”

“The outer core will not transmit certain types of seismic waves, which indicates that it is molten. […] Even farther into the interior, at the heart of the Earth, this metal magma becomes rock once more, albeit a rock that is mostly crystalline iron and nickel. However, it was not always so. The core used to be liquid throughout and then, some time ago, it began to crystallize into iron-nickel rock. Quite when this happened has been widely debated, with estimates ranging from over three billion years ago to about half a billion years ago. The inner core has now grown to something like 2,400 km across. Even allowing for the huge spans of geological time involved, this implies estimated rates of solidification that are impressive in real time — of some thousands of tons of molten metal crystallizing into solid form per second.”
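
That last figure is easy to sanity-check with round numbers – my own back-of-envelope, with assumed values for the inner core’s density and age:

```python
from math import pi

radius = 2_400e3 / 2      # inner core ~2,400 km across, in metres
rho    = 12_800           # kg/m^3, rough inner-core density (assumed)
age_s  = 1.0e9 * 3.15e7   # assume ~1 billion years of crystallization

mass = (4 / 3) * pi * radius**3 * rho
print(mass / age_s / 1_000)  # ~3,000 tonnes of metal solidifying per second
```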

“Rocks are made out of minerals, and those minerals are not a constant of the universe. A little like biological organisms, they have evolved and diversified through time. As the minerals have evolved, so have the rocks that they make up. […] The pattern of evolution of minerals was vividly outlined by Robert Hazen and his colleagues in what is now a classic paper published in 2008. They noted that in the depths of outer space, interstellar dust, as analysed by the astronomers’ spectroscopes, seems to be built of only about a dozen minerals […] Their component elements were forged in supernova explosions, and these minerals condensed among the matter and radiation that streamed out from these stellar outbursts. […] the number of minerals on the new Earth [shortly after formation was] about 500 (while the smaller, largely dry Moon has about 350). Plate tectonics began, with its attendant processes of subduction, mountain building, and metamorphism. The number of minerals rose to about 1,500 on a planet that may still have been biologically dead. […] The origin and spread of life at first did little to increase the number of mineral species, but once oxygen-producing photosynthesis started, then there was a great leap in mineral diversity as, for each mineral, various forms of oxide and hydroxide could crystallize. After this step, about two and a half billion years ago, there were over 4,000 minerals, most of them vanishingly rare. Since then, there may have been a slight increase in their numbers, associated with such events as the appearance and radiation of metazoan animals and plants […] Humans have begun to modify the chemistry and mineralogy of the Earth’s surface, and this has included the manufacture of many new types of mineral. […] Human-made minerals are produced in laboratories and factories around the world, with many new forms appearing every year. […] Materials sciences databases now being compiled suggest that more than 50,000 solid, inorganic, crystalline species have been created in the laboratory.”

Some links of interest:

Rock. Presolar grains. Silicate minerals. Silicon–oxygen tetrahedron. Quartz. Olivine. Feldspar. Mica. Jean-Baptiste Biot. Meteoritics. Achondrite/Chondrite/Chondrule. Carbonaceous chondrite. Iron–nickel alloy. Widmanstätten pattern. Giant-impact hypothesis (in the book this is not framed as a hypothesis nor is it explicitly referred to as the GIH; it’s just taken to be the correct account of what happened back then – US). Alfred Wegener. Arthur Holmes. Plate tectonics. Lithosphere. Asthenosphere. Fractional Melting (couldn’t find a wiki link about this exact topic; the MIT link is quite technical – sorry). Hotspot (geology). Fractional crystallization. Metastability. Devitrification. Porphyry (geology). Phenocryst. Thin section. Neptunism. Pyroclastic flow. Ignimbrite. Pumice. Igneous rock. Sedimentary rock. Weathering. Slab (geology). Clay minerals. Conglomerate (geology). Breccia. Aeolian processes. Hummocky cross-stratification. Ralph Alger Bagnold. Montmorillonite. Limestone. Ooid. Carbonate platform. Turbidite. Desert varnish. Evaporite. Law of Superposition. Stratigraphy. Pressure solution. Compaction (geology). Recrystallization (geology). Cleavage (geology). Phyllite. Aluminosilicate. Gneiss. Rock cycle. Ultramafic rock. Serpentinite. Pressure-Temperature-time paths. Hornfels. Impactite. Ophiolite. Xenolith. Kimberlite. Transition zone (Earth). Mantle convection. Mantle plume. Core–mantle boundary. Post-perovskite. Earth’s inner core. Inge Lehmann. Stromatolites. Banded iron formations. Microbial mat. Quorum sensing. Cambrian explosion. Bioturbation. Biostratigraphy. Coral reef. Radiolaria. Carbonate compensation depth. Paleosol. Bone bed. Coprolite. Allan Hills 84001. Tharsis. Pedestal crater. Mineraloid. Concrete.

February 19, 2017 Posted by | biology, books, Geology | Leave a comment

Anesthesia

“A recent study estimated that 234 million surgical procedures requiring anaesthesia are performed worldwide annually. Anaesthesia is the largest hospital specialty in the UK, with over 12,000 practising anaesthetists […] In this book, I give a short account of the historical background of anaesthetic practice, a review of anaesthetic equipment, techniques, and medications, and a discussion of how they work. The risks and side effects of anaesthetics will be covered, and some of the subspecialties of anaesthetic practice will be explored.”

I liked the book, and I gave it three stars on goodreads; I was closer to four stars than two. Below I have added a few sample observations from the book, as well as what turned out to be quite a considerable number of links (more than 60, by a brief count) to topics/people/etc. discussed or mentioned in the text. I decided to spend a bit more time finding relevant links than I’ve previously done when writing link-heavy posts, so in this post I have not limited myself to wikipedia articles; I e.g. also link directly to primary literature discussed in the coverage. The links provided are, as usual, meant to be indicators of which kind of stuff is covered in the book, rather than an alternative to the book; some of the wikipedia articles in particular I assume are not very good (the main point of a link to a wikipedia article of questionable quality is to indicate that I consider ‘awareness of the existence of concept X’ to be of interest/importance also to people who have not read this book, even if no great resource on the topic was immediately at hand to me).

Sample observations from the book:

“[G]eneral anaesthesia is not sleep. In physiological terms, the two states are very dissimilar. The term general anaesthesia refers to the state of unconsciousness which is deliberately produced by the action of drugs on the patient. Local anaesthesia (and its related terms) refers to the numbness produced in a part of the body by deliberate interruption of nerve function; this is typically achieved without affecting consciousness. […] The purpose of inhaling ether vapour [in the past] was so that surgery would be painless, not so that unconsciousness would necessarily be produced. However, unconsciousness and immobility soon came to be considered desirable attributes […] For almost a century, lying still was the only reliable sign of adequate anaesthesia.”

“The experience of pain triggers powerful emotional consequences, including fear, anger, and anxiety. A reasonable word for the emotional response to pain is ‘suffering’. Pain also triggers the formation of memories which remind us to avoid potentially painful experiences in the future. The intensity of pain perception and suffering also depends on the mental state of the subject at the time, and the relationship between pain, memory, and emotion is subtle and complex. […] The effects of adrenaline are responsible for the appearance of someone in pain: pale, sweating, trembling, with a rapid heart rate and breathing. Additionally, a hormonal storm is activated, readying the body to respond to damage and fight infection. This is known as the stress response. […] Those responses may be abolished by an analgesic such as morphine, which will counteract all those changes. For this reason, it is routine to use analgesic drugs in addition to anaesthetic ones. […] Typical anaesthetic agents are poor at suppressing the stress response, but analgesics like morphine are very effective. […] The hormonal stress response can be shown to be harmful, especially to those who are already ill. For example, the increase in blood coagulability which evolved to reduce blood loss as a result of injury makes the patient more likely to suffer a deep venous thrombosis in the leg veins.”

“If we monitor the EEG of someone under general anaesthesia, certain identifiable changes to the signal occur. In general, the frequency spectrum of the signal slows. […] Next, the overall power of the signal diminishes. In very deep general anaesthesia, short periods of electrical silence, known as burst suppression, can be observed. Finally, the overall randomness of the signal, its entropy, decreases. In short, the EEG of someone who is anaesthetized looks completely different from someone who is awake. […] Depth of anaesthesia is no longer considered to be a linear concept […] since it is clear that anaesthesia is not a single process. It is now believed that the two most important components of anaesthesia are unconsciousness and suppression of the stress response. These can be represented on a three-dimensional diagram called a response surface. [Here’s incidentally a recent review paper on related topics, US]”
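
The three signal changes listed here are easy to demonstrate on synthetic data. A quick sketch of mine (not from the book), using a fast noisy trace as ‘awake’ and a slow clean oscillation as ‘anaesthetized’:

```python
import numpy as np
from scipy.signal import welch

fs = 256                                 # sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)

awake = np.sin(2 * np.pi * 20 * t) + 0.5 * rng.standard_normal(t.size)
anaes = 0.4 * np.sin(2 * np.pi * 2 * t) + 0.1 * rng.standard_normal(t.size)

for name, sig in (("awake", awake), ("anaesthetized", anaes)):
    f, psd = welch(sig, fs=fs)
    p = psd / psd.sum()                  # normalized power spectrum
    entropy = -np.sum(p * np.log2(p + 1e-12))
    print(name, f"peak {f[np.argmax(psd)]:.0f} Hz",
          f"power {sig.var():.2f}", f"entropy {entropy:.2f}")
# The 'anaesthetized' trace shows a slower dominant frequency,
# lower overall power, and lower spectral entropy.
```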

“Before the widespread advent of anaesthesia, there were very few painkilling options available. […] Alcohol was commonly given as a means of enhancing the patient’s courage prior to surgery, but alcohol has almost no effect on pain perception. […] For many centuries, opium was the only effective pain-relieving substance known. […] For general anaesthesia to be discovered, certain prerequisites were required. On the one hand, the idea that surgery without pain was achievable had to be accepted as possible. Despite tantalizing clues from history, this idea took a long time to catch on. The few workers who pursued this idea were often openly ridiculed. On the other, an agent had to be discovered that was potent enough to render a patient suitably unconscious to tolerate surgery, but not so potent that overdose (hence accidental death) was too likely. This agent also needed to be easy to produce, tolerable for the patient, and easy enough for untrained people to administer. The herbal candidates (opium, mandrake) were too unreliable or dangerous. The next reasonable candidate, and every agent since, was provided by the proliferating science of chemistry.”

“Inducing anaesthesia by intravenous injection is substantially quicker than the inhalational method. Inhalational induction may take several minutes, while intravenous induction happens in the time it takes for the blood to travel from the needle to the brain (30 to 60 seconds). The main benefit of this is not convenience or comfort but patient safety. […] It was soon discovered that the ideal balance is to induce anaesthesia intravenously, but switch to an inhalational agent […] to keep the patient anaesthetized during the operation. The template of an intravenous induction followed by maintenance with an inhalational agent is still widely used today. […] Most of the drawbacks of volatile agents disappear when the patient is already anaesthetized [and] volatile agents have several advantages for maintenance. First, they are predictable in their effects. Second, they can be conveniently administered in known quantities. Third, the concentration delivered or exhaled by the patient can be easily and reliably measured. Finally, at steady state, the concentration of volatile agent in the patient’s expired air is a close reflection of its concentration in the patient’s brain. This gives the anaesthetist a reliable way of ensuring that enough anaesthetic is present to ensure the patient remains anaesthetized.”

“All current volatile agents are colourless liquids that evaporate into a vapour which produces general anaesthesia when inhaled. All are chemically stable, which means they are non-flammable, and not likely to break down or be metabolized to poisonous products. What distinguishes them from each other are their specific properties: potency, speed of onset, and smell. Potency of an inhalational agent is expressed as MAC, the minimum alveolar concentration required to keep 50% of adults unmoving in response to a standard surgical skin incision. MAC as a concept was introduced […] in 1963, and has proven to be a very useful way of comparing potencies of different anaesthetic agents. […] MAC correlates with observed depth of anaesthesia. It has been known for over a century that potency correlates very highly with lipid solubility; that is, the more soluble an agent is in lipid […], the more potent an anaesthetic it is. This is known as the Meyer-Overton correlation […] Speed of onset is inversely proportional to water solubility. The less soluble in water, the more rapidly an agent will take effect. […] Where immobility is produced at around 1.0 MAC, amnesia is produced at a much lower dose, typically 0.25 MAC, and unconsciousness at around 0.5 MAC. Therefore, a patient may move in response to a surgical stimulus without either being conscious of the stimulus, or remembering it afterwards.”
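
The Meyer-Overton correlation can be stated compactly: MAC times lipid (oil:gas) solubility is roughly constant across agents. A small illustration – the partition coefficients are rough values from memory, not from the book:

```python
# agent: (MAC in vol%, oil:gas partition coefficient) - approximate values
agents = {
    "nitrous oxide": (104, 1.4),
    "isoflurane":    (1.15, 98),
    "halothane":     (0.75, 224),
}
for name, (mac, oil_gas) in agents.items():
    print(f"{name:14s} MAC x solubility = {mac * oil_gas:5.0f}")
# MACs span two orders of magnitude, yet the products all land in
# the same ~100-200 range: the more lipid-soluble, the more potent.
```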

“The most useful way to estimate the body’s physiological reserve is to assess the patient’s tolerance for exercise. Exercise is a good model of the surgical stress response. The greater the patient’s tolerance for exercise, the better the perioperative outcome is likely to be […] For a smoker who is unable to quit, stopping for even a couple of days before the operation improves outcome. […] Dying ‘on the table’ during surgery is very unusual. Patients who die following surgery usually do so during convalescence, their weakened state making them susceptible to complications such as wound breakdown, chest infections, deep venous thrombosis, and pressure sores.”

“Mechanical ventilation is based on the principle of intermittent positive pressure ventilation (IPPV), gas being ‘blown’ into the patient’s lungs from the machine. […] Inflating a patient’s lungs is a delicate process. Healthy lung tissue is fragile, and can easily be damaged by overdistension (barotrauma). While healthy lung tissue is light and spongy, and easily inflated, diseased lung tissue may be heavy and waterlogged and difficult to inflate, and therefore may collapse, allowing blood to pass through it without exchanging any gases (this is known as shunt). Simply applying higher pressures may not be the answer: this may just overdistend adjacent areas of healthier lung. The ventilator must therefore provide a series of breaths whose volume and pressure are very closely controlled. Every aspect of a mechanical breath may now be adjusted by the anaesthetist: the volume, the pressure, the frequency, and the ratio of inspiratory time to expiratory time are only the basic factors.”

“All anaesthetic drugs are poisons. Remember that in achieving a state of anaesthesia you intend to poison someone, but not kill them – so give as little as possible. [Introductory quote to a chapter, from an Anaesthetics textbook – US] […] Other cells besides neurons use action potentials as the basis of cellular signalling. For example, the synchronized contraction of heart muscle is performed using action potentials, and action potentials are transmitted from nerves to skeletal muscle at the neuromuscular junction to initiate movement. Local anaesthetic drugs are therefore toxic to the heart and brain. In the heart, local anaesthetic drugs interfere with normal contraction, eventually stopping the heart. In the brain, toxicity causes seizures and coma. To avoid toxicity, the total dose is carefully limited”.

Links of interest:

Anaesthesia.
General anaesthesia.
Muscle relaxant.
Nociception.
Arthur Ernest Guedel.
Guedel’s classification.
Beta rhythm.
Frances Burney.
Laudanum.
Dwale.
Henry Hill Hickman.
Horace Wells.
William Thomas Green Morton.
Diethyl ether.
Chloroform.
James Young Simpson.
Joseph Thomas Clover.
Barbiturates.
Inhalational anaesthetic.
Antisialagogue.
Pulmonary aspiration.
Principles of Total Intravenous Anaesthesia (TIVA).
Propofol.
Patient-controlled analgesia.
Airway management.
Oropharyngeal airway.
Tracheal intubation.
Laryngoscopy.
Laryngeal mask airway.
Anaesthetic machine.
Soda lime.
Sodium thiopental.
Etomidate.
Ketamine.
Neuromuscular-blocking drug.
Neostigmine.
Sugammadex.
Gate control theory of pain.
Multimodal analgesia.
Hartmann’s solution (…what this is called seems to depend on whom you ask, but it’s called Hartmann’s solution in the book…).
Local anesthetic.
Karl Koller.
Amylocaine.
Procaine.
Lidocaine.
Regional anesthesia.
Spinal anaesthesia.
Epidural nerve block.
Intensive care medicine.
Bjørn Aage Ibsen.
Chronic pain.
Pain wind-up.
John Bonica.
Twilight sleep.
Veterinary anesthesia.
Pearse et al. (results of paper briefly discussed in the book).
Awareness under anaesthesia (skip the first page).
Pollard et al. (2007).
Postoperative nausea and vomiting.
Postoperative cognitive dysfunction.
Monk et al. (2008).
Malignant hyperthermia.
Suxamethonium apnoea.

February 13, 2017 Posted by | books, Chemistry, medicine, papers, Pharmacology | Leave a comment

Particle Physics

(Two SMBC comics, from 2009-02-13 and 2009-07-03, were embedded here.)

(Smbc, second one here. There were a lot of relevant ones to choose from – this one also seems ‘relevant’. And this one. And this one. This one? This one? This one? Maybe this one? In the end I decided to only include the two comics displayed above, but you should be aware of the others…)

The book is a bit dated – it was published before the LHC even started operations – but it’s a decent read. I can’t say I liked it as much as the other books in the series which I recently covered, on galaxies and the laws of thermodynamics, mostly because this book is a bit more pop-science-y than those, so the level of coverage was at times a little disappointing by comparison. That said, the book is far from terrible; I learned a lot, and I can imagine the author faced a very difficult task.

Below I have added a few observations from the book and some links to articles about some key concepts and things mentioned/covered in the book.

“[T]oday we view the collisions between high-energy particles as a means of studying the phenomena that ruled when the universe was newly born. We can study how matter was created and discover what varieties there were. From this we can construct the story of how the material universe has developed from that original hot cauldron to the cool conditions here on Earth today, where matter is made from electrons, without need for muons and taus, and where the seeds of atomic nuclei are just the up and down quarks, without need for strange or charming stuff.

In very broad terms, this is the story of what has happened. The matter that was born in the hot Big Bang consisted of quarks and particles like the electron. As concerns the quarks, the strange, charm, bottom, and top varieties are highly unstable, and died out within a fraction of a second, the weak force converting them into their more stable progeny, the up and down varieties which survive within us today. A similar story took place for the electron and its heavier versions, the muon and tau. This latter pair are also unstable and died out, courtesy of the weak force, leaving the electron as survivor. In the process of these decays, lots of neutrinos and electromagnetic radiation were also produced, which continue to swarm throughout the universe some 14 billion years later.

The up and down quarks and the electrons were the survivors while the universe was still very young and hot. As it cooled, the quarks were stuck to one another, forming protons and neutrons. The mutual gravitational attraction among these particles gathered them into large clouds that were primaeval stars. As they bumped into one another in the heart of these stars, the protons and neutrons built up the seeds of heavier elements. Some stars became unstable and exploded, ejecting these atomic nuclei into space, where they trapped electrons to form atoms of matter as we know it. […] What we can now do in experiments is in effect reverse the process and observe matter change back into its original primaeval forms.”

“A fully grown human is a bit less than two metres tall. […] to set the scale I will take humans to be about 1 metre in ‘order of magnitude’ […yet another smbc comic springs to mind here] […] Then, going to the large scales of astronomy, we have the radius of the Earth, some 10^7 m […]; that of the Sun is 10^9 m; our orbit around the Sun is 10^11 m […] note that the relative sizes of the Earth, Sun, and our orbit are factors of about 100. […] Whereas the atom is typically 10^-10 m across, its central nucleus measures only about 10^-14 to 10^-15 m. So beware the oft-quoted analogy that atoms are like miniature solar systems with the ‘planetary electrons’ encircling the ‘nuclear sun’. The real solar system has a factor 1/100 between our orbit and the size of the central Sun; the atom is far emptier, with 1/10,000 as the corresponding ratio between the extent of its central nucleus and the radius of the atom. And this emptiness continues. Individual protons and neutrons are about 10^-15 m in diameter […] the relative size of quark to proton is some 1/10,000 (at most!). The same is true for the ‘planetary’ electron relative to the proton ‘sun’: 1/10,000 rather than the ‘mere’ 1/100 of the real solar system. So the world within the atom is incredibly empty.”

“Our inability to see atoms has to do with the fact that light acts like a wave and waves do not scatter easily from small objects. To see a thing, the wavelength of the beam must be smaller than that thing is. Therefore, to see molecules or atoms needs illuminations whose wavelengths are similar to or smaller than them. Light waves, like those our eyes are sensitive to, have wavelength about 10⁻⁷ m […]. This is still a thousand times bigger than the size of an atom. […] To have any chance of seeing molecules and atoms we need light with wavelengths much shorter than these. [And so we move into the world of X-ray crystallography and particle accelerators] […] To probe deep within atoms we need a source of very short wavelength. […] the technique is to use the basic particles […], such as electrons and protons, and speed them in electric fields. The higher their speed, the greater their energy and momentum and the shorter their associated wavelength. So beams of high-energy particles can resolve things as small as atoms.”
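
The relation behind ‘the higher their speed, the greater their energy and momentum and the shorter their associated wavelength’ is the de Broglie formula λ = h/p. Here is a minimal sketch; the non-relativistic treatment and the 100 eV figure are my own illustrative choices, not the book's:

```python
import math

h   = 6.626e-34   # Planck constant, J*s
m_e = 9.109e-31   # electron mass, kg
eV  = 1.602e-19   # joules per electronvolt

def de_broglie_wavelength(kinetic_energy_eV):
    """Non-relativistic de Broglie wavelength of an electron, in metres."""
    p = math.sqrt(2 * m_e * kinetic_energy_eV * eV)   # momentum from E = p^2/2m
    return h / p

# An electron at ~100 eV already has a wavelength of atomic size (~1e-10 m),
# which is why electron beams can resolve atoms where visible light cannot.
print(de_broglie_wavelength(100))
```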

“About 400 billion neutrinos from the Sun pass through each one of us each second.”

“For a century beams of particles have been used to reveal the inner structure of atoms. These have progressed from naturally occurring alpha and beta particles, courtesy of natural radioactivity, through cosmic rays to intense beams of electrons, protons, and other particles at modern accelerators. […] Different particles probe matter in complementary ways. It has been by combining the information from [the] various approaches that our present rich picture has emerged. […] It was the desire to replicate the cosmic rays under controlled conditions that led to modern high-energy physics at accelerators. […] Electrically charged particles are accelerated by electric forces. Apply enough electric force to an electron, say, and it will go faster and faster in a straight line […] Under the influence of a magnetic field, the path of a charged particle will curve. By using electric fields to speed them, and magnetic fields to bend their trajectory, we can steer particles round circles over and over again. This is the basic idea behind huge rings, such as the 27-km-long accelerator at CERN in Geneva. […] our ability to learn about the origins and nature of matter have depended upon advances on two fronts: the construction of ever more powerful accelerators, and the development of sophisticated means of recording the collisions.”
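
The bending described here follows r = p/(qB): the larger the particle's momentum, the larger the circle a given magnetic field can hold it on. A hedged sketch with roughly LHC-like numbers of my own choosing (the book mentions the 27 km ring; the momentum and field values below are standard design figures I've supplied, not quotes from the book):

```python
e = 1.602e-19   # elementary charge, C
c = 2.998e8     # speed of light, m/s

def bending_radius(p_GeV_per_c, B_tesla):
    """Radius of curvature r = p/(qB) for a singly charged particle."""
    p_SI = p_GeV_per_c * 1e9 * e / c   # convert GeV/c to kg*m/s
    return p_SI / (e * B_tesla)

# A 7000 GeV/c proton in an 8.3 T dipole field needs a bending radius of
# roughly 2.8 km, which is why the ring has to be many kilometres around.
print(bending_radius(7000, 8.3))   # ~2.8e3 m
```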

Matter.
Particle.
Particle physics.
Strong interaction.
Weak interaction (‘good article’).
Electron (featured).
Quark (featured).
Fundamental interactions.
Electronvolt.
Electromagnetic spectrum.
Cathode ray.
Alpha particle.
Cloud chamber.
Atomic spectroscopy.
Ionization.
Resonance (particle physics).
Spin (physics).
Beta decay.
Neutrino.
Neutrino astronomy.
Antiparticle.
Baryon/meson.
Pion.
Particle accelerator/Cyclotron/Synchrotron/Linear particle accelerator.
Collider.
B-factory.
Particle detector.
Cherenkov radiation.
Sudbury Neutrino Observatory.
Quantum chromodynamics.
Color charge.
Force carrier.
W and Z bosons.
Electroweak interaction (/theory).
Exotic matter.
Strangeness.
Strange quark.
Charm (quantum number).
Antimatter.
Inverse beta decay.
Dark matter.
Standard model.
Supersymmetry.
Higgs boson.
Quark–gluon plasma.
CP violation.

February 9, 2017 Posted by | books, Physics

Books 2017

Below is a list of books I’ve read in 2017.

The letters ‘f’, ‘nf’, and ‘m’ in the parentheses indicate which type of book it was: ‘f’ refers to ‘fiction’ books, ‘nf’ to ‘non-fiction’ books, and the ‘m’ category covers ‘miscellaneous’ books. The numbers in the parentheses correspond to the goodreads ratings I thought the books deserved.

As usual I’ll try to update the post regularly throughout the year.

1. Brief Candles (3, f). Manning Coles.

2. Galaxies: A Very Short Introduction (4, nf. Oxford University Press). Blog coverage here.

3. Mirabile (2, f). Janet Kagan. Short goodreads review here.

4. Blackout (5, f). Connie Willis. Goodreads review here (note that this review is a ‘composite review’ of both Blackout and All Clear).

5. All Clear (5, f). Connie Willis.

6. The Laws of Thermodynamics: A Very Short Introduction (4, nf. Oxford University Press). Blog coverage here.

7. A Knight of the Seven Kingdoms (3, f). George R. R. Martin. Goodreads review here.

8. The Economics of International Immigration (1, nf. Springer). Goodreads review here.

9. American Gods (2, f). Neil Gaiman. Short goodreads review here – I was not impressed.

10. The Story of the Stone (3, f). Barry Hughart. Goodreads review here.

11. Particle Physics: A Very Short Introduction (3, nf. Oxford University Press). Blog coverage here.

12. The Wallet of Kai Lung (4, f). Ernest Bramah. Goodreads review here.

13. Kai Lung’s Golden Hours (4, f). Ernest Bramah.

14. Kai Lung Unrolls His Mat (4, f). Ernest Bramah. Goodreads review here.

15. Anaesthesia: A Very Short Introduction (3, nf. Oxford University Press). Blog coverage here.

16. The Moon of Much Gladness (5, f). Ernest Bramah. Goodreads review here.

17. All Trivia – A collection of reflections & aphorisms (2, m). Logan Pearsall Smith. Short goodreads review here.

18. Rocks: A very short introduction (3, nf. Oxford University Press). Blog coverage here.

19. Kai Lung Beneath the Mulberry-Tree (4, f). Ernest Bramah.

20. Economic Analysis in Healthcare (2, nf. Wiley). Blog coverage here.

21. The Best of Connie Willis: Award-Winning Stories (f.). Connie Willis. Goodreads review here.

February 9, 2017 Posted by | books

The Laws of Thermodynamics

Here’s a relevant 60 symbols video with Mike Merrifield. Below are a few observations from the book, as well as some links.

“Among the hundreds of laws that describe the universe, there lurks a mighty handful. These are the laws of thermodynamics, which summarize the properties of energy and its transformation from one form to another. […] The mighty handful consists of four laws, with the numbering starting inconveniently at zero and ending at three. The first two laws (the ‘zeroth’ and the ‘first’) introduce two familiar but nevertheless enigmatic properties, the temperature and the energy. The third of the four (the ‘second law’) introduces what many take to be an even more elusive property, the entropy […] The second law is one of the all-time great laws of science […]. The fourth of the laws (the ‘third law’) has a more technical role, but rounds out the structure of the subject and both enables and foils its applications.”

“Classical thermodynamics is the part of thermodynamics that emerged during the nineteenth century before everyone was fully convinced about the reality of atoms, and concerns relationships between bulk properties. You can do classical thermodynamics even if you don’t believe in atoms. Towards the end of the nineteenth century, when most scientists accepted that atoms were real and not just an accounting device, there emerged the version of thermodynamics called statistical thermodynamics, which sought to account for the bulk properties of matter in terms of its constituent atoms. The ‘statistical’ part of the name comes from the fact that in the discussion of bulk properties we don’t need to think about the behaviour of individual atoms but we do need to think about the average behaviour of myriad atoms. […] In short, whereas dynamics deals with the behaviour of individual bodies, thermodynamics deals with the average behaviour of vast numbers of them.”
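
To make the ‘average behaviour of myriad atoms’ concrete, here is a small simulation sketch of my own (not from the book): sample a million ideal-gas atoms from the Maxwell–Boltzmann velocity distribution and check that their average kinetic energy comes out at (3/2)kT, a bulk property that no single atom possesses.

```python
import numpy as np

k_B = 1.381e-23    # Boltzmann constant, J/K
m   = 6.64e-27     # mass of a helium atom, kg (my illustrative choice)
T   = 300.0        # temperature, K

rng = np.random.default_rng(0)
# Each velocity component is normally distributed with variance k_B*T/m.
v = rng.normal(0.0, np.sqrt(k_B * T / m), size=(1_000_000, 3))

mean_kinetic_energy = 0.5 * m * (v**2).sum(axis=1).mean()
print(mean_kinetic_energy / (1.5 * k_B * T))   # ~1.0: average KE = (3/2) k_B T
```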

“In everyday language, heat is both a noun and a verb. Heat flows; we heat. In thermodynamics heat is not an entity or even a form of energy: heat is a mode of transfer of energy. It is not a form of energy, or a fluid of some kind, or anything of any kind. Heat is the transfer of energy by virtue of a temperature difference. Heat is the name of a process, not the name of an entity.”

“The supply of 1J of energy as heat to 1 g of water results in an increase in temperature of about 0.2°C. Substances with a high heat capacity (water is an example) require a larger amount of heat to bring about a given rise in temperature than those with a small heat capacity (air is an example). In formal thermodynamics, the conditions under which heating takes place must be specified. For instance, if the heating takes place under conditions of constant pressure with the sample free to expand, then some of the energy supplied as heat goes into expanding the sample and therefore to doing work. Less energy remains in the sample, so its temperature rises less than when it is constrained to have a constant volume, and therefore we report that its heat capacity is higher. The difference between heat capacities of a system at constant volume and at constant pressure is of most practical significance for gases, which undergo large changes in volume as they are heated in vessels that are able to expand.”
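
The first sentence is just ΔT = Q/(m·c), with c the specific heat capacity. A quick check, using the standard value of about 4.18 J/(g·K) for water (a figure I've supplied; the excerpt only states the result):

```python
def temperature_rise(Q_joule, mass_g, c_J_per_gK):
    """Temperature rise on heating: dT = Q / (m * c)."""
    return Q_joule / (mass_g * c_J_per_gK)

# Water, c ~ 4.18 J/(g*K): 1 J into 1 g gives the book's "about 0.2 degC".
print(temperature_rise(1.0, 1.0, 4.18))   # ~0.24
# Air, c ~ 1.0 J/(g*K): the same joule produces a larger temperature rise.
print(temperature_rise(1.0, 1.0, 1.0))    # 1.0
```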

“Heat capacities vary with temperature. An important experimental observation […] is that the heat capacity of every substance falls to zero when the temperature is reduced towards absolute zero (T = 0). A very small heat capacity implies that even a tiny transfer of heat to a system results in a significant rise in temperature, which is one of the problems associated with achieving very low temperatures when even a small leakage of heat into a sample can have a serious effect on the temperature”.

“A crude restatement of Clausius’s statement is that refrigerators don’t work unless you turn them on.”

“The Gibbs energy is of the greatest importance in chemistry and in the field of bioenergetics, the study of energy utilization in biology. Most processes in chemistry and biology occur at constant temperature and pressure, and so to decide whether they are spontaneous and able to produce non-expansion work we need to consider the Gibbs energy. […] Our bodies live off Gibbs energy. Many of the processes that constitute life are non-spontaneous reactions, which is why we decompose and putrefy when we die and these life-sustaining reactions no longer continue. […] In biology a very important ‘heavy weight’ reaction involves the molecule adenosine triphosphate (ATP). […] When a terminal phosphate group is snipped off by reaction with water […], to form adenosine diphosphate (ADP), there is a substantial decrease in Gibbs energy, arising in part from the increase in entropy when the group is liberated from the chain. Enzymes in the body make use of this change in Gibbs energy […] to bring about the linking of amino acids, and gradually build a protein molecule. It takes the effort of about three ATP molecules to link two amino acids together, so the construction of a typical protein of about 150 amino acid groups needs the energy released by about 450 ATP molecules. […] The ADP molecules, the husks of dead ATP molecules, are too valuable just to discard. They are converted back into ATP molecules by coupling to reactions that release even more Gibbs energy […] and which reattach a phosphate group to each one. These heavy-weight reactions are the reactions of metabolism of the food that we need to ingest regularly.”
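
The ATP bookkeeping at the end is easy to verify: a chain of n amino acids contains n−1 peptide bonds, each costing about three ATP according to the passage. A trivial sketch:

```python
def atp_cost(n_amino_acids, atp_per_link=3):
    """ATP needed to link a chain of n amino acids (n - 1 peptide bonds)."""
    return (n_amino_acids - 1) * atp_per_link

# The book's example: a typical ~150-residue protein costs ~450 ATP.
print(atp_cost(150))   # 447, i.e. roughly 450
```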

Links of interest below – these cover the sort of material discussed in the book:

Laws of thermodynamics (article includes links to many other articles of interest, including links to each of the laws mentioned above).
System concepts.
Intensive and extensive properties.
Mechanical equilibrium.
Thermal equilibrium.
Diathermal wall.
Thermodynamic temperature.
Thermodynamic beta.
Ludwig Boltzmann.
Boltzmann constant.
Maxwell–Boltzmann distribution.
Conservation of energy.
Work (physics).
Internal energy.
Heat (physics).
Microscopic view of heat.
Reversible process (thermodynamics).
Carnot’s theorem.
Enthalpy.
Fluctuation-dissipation theorem.
Noether’s theorem.
Entropy.
Thermal efficiency.
Rudolf Clausius.
Spontaneous process.
Residual entropy.
Heat engine.
Coefficient of performance.
Helmholtz free energy.
Gibbs free energy.
Phase transition.
Chemical equilibrium.
Superconductivity.
Superfluidity.
Absolute zero.

February 5, 2017 Posted by | books, Physics

Galaxies

I have added some observations from the book below, as well as some links covering people, ideas, and concepts discussed or mentioned in the book.

“On average, out of every 100 newly born star systems, 60 are binaries and 40 are triples. Solitary stars like the Sun are later ejected from triple systems formed in this way.”

“…any object will become a black hole if it is sufficiently compressed. For any mass, there is a critical radius, called the Schwarzschild radius, for which this occurs. For the Sun, the Schwarzschild radius is just under 3 km; for the Earth, it is just under 1 cm. In either case, if the entire mass of the object were squeezed within the appropriate Schwarzschild radius it would become a black hole.”
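
The critical radius mentioned here is the Schwarzschild radius, r_s = 2GM/c². A minimal sketch reproducing the book's two figures (the constants and masses are standard values I've supplied):

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

def schwarzschild_radius(mass_kg):
    """Radius r_s = 2*G*M/c^2 to which a mass must be squeezed to form a black hole."""
    return 2 * G * mass_kg / c**2

print(schwarzschild_radius(1.989e30))   # Sun:   ~2950 m, just under 3 km
print(schwarzschild_radius(5.972e24))   # Earth: ~0.0089 m, just under 1 cm
```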

“It only became possible to study the centre of our Galaxy when radio telescopes and other instruments that do not rely on visible light became available. There is a great deal of dust in the plane of the Milky Way […] This blocks out visible light. But longer wavelengths penetrate the dust more easily. That is why sunsets are red – short wavelength (blue) light is scattered out of the line of sight by dust in the atmosphere, while the longer wavelength red light gets through to your eyes. So our understanding of the galactic centre is largely based on infrared and radio observations.”

“there is strong evidence that the Milky Way Galaxy is a completely ordinary disc galaxy, a typical representative of its class. Since that is the case, it means that we can confidently use our inside knowledge of the structure and evolution of our own Galaxy, based on close-up observations, to help our understanding of the origin and nature of disc galaxies in general. We do not occupy a special place in the Universe; but this was only finally established at the end of the 20th century. […] in the decades following Hubble’s first measurements of the cosmological distance scale, the Milky Way still seemed like a special place. Hubble’s calculation of the distance scale implied that other galaxies are relatively close to our Galaxy, and so they would not have to be very big to appear as large as they do on the sky; the Milky Way seemed to be by far the largest galaxy in the Universe. We now know that Hubble was wrong. […] the value he initially found for the Hubble Constant was about seven times bigger than the value accepted today. In other words, all the extragalactic distances Hubble inferred were seven times too small. But this was not realized overnight. The cosmological distance scale was only revised slowly, over many decades, as observations improved and one error after another was corrected. […] The importance of determining the cosmological distance scale accurately, more than half a century after Hubble’s pioneering work, was still so great that it was a primary justification for the existence of the Hubble Space Telescope (HST).”

“The key point to grasp […] is that the expansion described by [Einstein’s] equations is an expansion of space as time passes. The cosmological redshift is not a Doppler effect caused by galaxies moving outward through space, as if fleeing from the site of some great explosion, but occurs because the space between the galaxies is stretching. So the spaces between galaxies increase while light is on its way from one galaxy to another. This stretches the light waves to longer wavelengths, which means shifting them towards the red end of the spectrum. […] The second key point about the universal expansion is that it does not have a centre. There is nothing special about the fact that we observe galaxies receding with redshifts proportional to their distances from the Milky Way. […] whichever galaxy you happen to be sitting in, you will see the same thing – redshift proportional to distance.”
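
The stretching can be stated compactly: 1 + z equals the ratio of the size of the universe (the scale factor) at observation to its size at emission. A small sketch of my own illustrating the standard relation, not the book's notation:

```python
def redshift_from_expansion(scale_factor_at_emission, scale_factor_now=1.0):
    """Cosmological redshift: 1 + z = a(observation) / a(emission)."""
    return scale_factor_now / scale_factor_at_emission - 1.0

# Light emitted when the universe was half its present size arrives with
# every wavelength doubled, i.e. at redshift z = 1:
print(redshift_from_expansion(0.5))    # 1.0
print(redshift_from_expansion(0.25))   # 3.0 -- space stretched fourfold
```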

“The age of the Universe is determined by studying some of the largest things in the Universe, clusters of galaxies, and analysing their behaviour using the general theory of relativity. Our understanding of how stars work, from which we calculate their ages, comes from studying some of the smallest things in the Universe, the nuclei of atoms, and using the other great theory of 20th-century physics, quantum mechanics, to calculate how nuclei fuse with one another to release the energy that keeps stars shining. The fact that the two ages agree with one another, and that the ages of the oldest stars are just a little bit less than the age of the Universe, is one of the most compelling reasons to think that the whole of 20th-century physics works and provides a good description of the world around us, from the very small scale to the very large scale.”

“Planets are small objects orbiting a large central mass, and the gravity of the Sun dominates their motion. Because of this, the speed with which a planet moves […] is inversely proportional to the square root of its distance from the centre of the Solar System. Jupiter is farther from the Sun than we are, so it moves more slowly in its orbit than the Earth, as well as having a larger orbit. But all the stars in the disc of a galaxy move at the same speed. Stars farther out from the centre still have bigger orbits, so they still take longer to complete one circuit of the galaxy. But they are all travelling at essentially the same orbital speed through space.”
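
For the planetary case the speed follows from balancing gravity against circular motion, v = √(GM/r). A hedged sketch contrasting the Earth and Jupiter, with standard constants supplied by me:

```python
import math

G     = 6.674e-11   # m^3 kg^-1 s^-2
M_sun = 1.989e30    # kg
AU    = 1.496e11    # m, mean Earth-Sun distance

def orbital_speed(central_mass_kg, r_m):
    """Circular orbital speed around a dominant central mass: v = sqrt(GM/r)."""
    return math.sqrt(G * central_mass_kg / r_m)

# Keplerian fall-off: Jupiter (at ~5.2 AU) moves more slowly than the Earth.
print(orbital_speed(M_sun, 1.0 * AU))   # ~29,800 m/s
print(orbital_speed(M_sun, 5.2 * AU))   # ~13,100 m/s
# In a galactic disc, by contrast, the measured speeds stay roughly constant
# with radius -- the flat rotation curve that points to unseen mass.
```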

“The importance of studying objects at great distances across the Universe is that when we look at an object that is, say, 10 billion light years away, we see it by light which left it 10 billion years ago. This is the ‘look back time’, and it means that telescopes are in a sense time machines, showing us what the Universe was like when it was younger. The light from a distant galaxy is old, in the sense that it has been a long time on its journey; but the galaxy we see using that light is a young galaxy. […] For distant objects, because light has taken a long time on its journey to us, the Universe has expanded significantly while the light was on its way. […] This raises problems defining exactly what you mean by the ‘present distance’ to a remote galaxy”

“Among the many advantages that photographic and electronic recording methods have over the human eye, the most fundamental is that the longer they look, the more they see. Human eyes essentially give us a real-time view of our surroundings, and allow us to see things – such as stars – that are brighter than a certain limit. If an object is too faint to see, once your eyes have adapted to the dark no amount of staring in its direction will make it visible. But the detectors attached to modern telescopes keep on adding up the light from faint sources as long as they are pointing at them. A longer exposure will reveal fainter objects than a short exposure does, as the photons (particles of light) from the source fall on the detector one by one and the total gradually grows.”
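
The ‘adding up’ can be made quantitative with photon-counting (Poisson) statistics: the accumulated signal grows linearly with exposure time while the noise grows only as its square root, so faint sources eventually climb out of the noise. A small sketch; the photon rate is an arbitrary made-up figure:

```python
import math

def photon_snr(rate_per_s, exposure_s):
    """Poisson counting: signal = rate*t, noise = sqrt(rate*t), so SNR = sqrt(rate*t)."""
    counts = rate_per_s * exposure_s
    return math.sqrt(counts)

# A faint source delivering 0.5 photons/s to the detector:
print(photon_snr(0.5, 10))     # ~2.2 -- barely distinguishable from noise
print(photon_snr(0.5, 1000))   # ~22  -- 100x the exposure buys 10x the SNR
```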

“Nobody can be quite sure where the supermassive black holes at the hearts of galaxies today came from, but it seems at least possible that […] merging of black holes left over from the first generation of stars [in the universe] began the process by which supermassive black holes, feeding off the matter surrounding them, formed. […] It seems very unlikely that supermassive black holes formed first and then galaxies grew around them; they must have formed together, in a process sometimes referred to as co-evolution, from the seeds provided by the original black holes of a few hundred solar masses and the raw materials of the dense clouds of baryons in the knots in the filamentary structure. […] About one in a hundred of the galaxies seen at low redshifts are actively involved in the late stages of mergers, but these processes take so little time, compared with the age of the Universe, that the statistics imply that about half of all the galaxies visible nearby are the result of mergers between similarly sized galaxies in the past seven or eight billion years. Disc galaxies like the Milky Way seem themselves to have been built up from smaller sub-units, starting out with the spheroid and adding bits and pieces as time passed. […] there were many more small galaxies when the Universe was young than we see around us today. This is exactly what we would expect if many of the small galaxies have either grown larger through mergers or been swallowed up by larger galaxies.”

Links of interest:

Galaxy (‘featured article’).
Leonard Digges.
Thomas Wright.
William Herschel.
William Parsons.
The Great Debate.
Parallax.
Extinction (astronomy).
Henrietta Swan Leavitt (‘good article’).
Cepheid variable.
Ejnar Hertzsprung. (Before reading this book, I had no idea one of the people behind the famous Hertzsprung–Russell diagram was a Dane. I blame my physics teachers. I was probably told this by one of them, but if the guy in question had been a better teacher, I’d have listened, and I’d have known this.).
Globular cluster (‘featured article’).
Vesto Slipher.
Redshift (‘featured article’).
Refracting telescope/Reflecting telescope.
Disc galaxy.
Edwin Hubble.
Milton Humason.
Doppler effect.
Milky Way.
Orion Arm.
Stellar population.
Sagittarius A*.
Minkowski space.
General relativity (featured).
The Big Bang theory (featured).
Age of the universe.
Malmquist bias.
Type Ia supernova.
Dark energy.
Baryons/leptons.
Cosmic microwave background.
Cold dark matter.
Lambda-CDM model.
Lenticular galaxy.
Active galactic nucleus.
Quasar.
Hubble Ultra-Deep Field.
Stellar evolution.
Velocity dispersion.
Hawking radiation.
Ultimate fate of the universe.


February 5, 2017 Posted by | astronomy, books, cosmology, Physics

Diabetes and the Brain (III)

Some quotes from the book below.

“Tests that are used in clinical neuropsychology in most cases examine one or more aspects of cognitive domains, which are theoretical constructs in which a multitude of cognitive processes are involved. […] By definition, a subdivision in cognitive domains is arbitrary, and many different classifications exist. […] for a test to be recommended, several criteria must be met. First, a test must have adequate reliability: the test must yield similar outcomes when applied over multiple test sessions, i.e., have good test–retest reliability. […] Furthermore, the interobserver reliability is important, in that the test must have a standardized assessment procedure and is scored in the same manner by different examiners. Second, the test must have adequate validity. Here, different forms of validity are important. Content validity is established by expert raters with respect to item formulation, item selection, etc. Construct validity refers to the underlying theoretical construct that the test is assumed to measure. To assess construct validity, both convergent and divergent validities are important. Convergent validity refers to the amount of agreement between a given test and other tests that measure the same function. In turn, a test with a good divergent validity correlates minimally with tests that measure other cognitive functions. Moreover, predictive validity (or criterion validity) is related to the degree of correlation between the test score and an external criterion, for example, the correlation between a cognitive test and functional status. […] it should be stressed that cognitive tests alone cannot be used as ultimate proof for organic brain damage, but should be used in combination with more direct measures of cerebral abnormalities, such as neuroimaging.”
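
As a toy illustration of the test–retest idea (my own sketch, not from the book): the stability of a test across two administrations is commonly summarized by the correlation between the two sets of scores.

```python
import numpy as np

# Hypothetical scores of eight subjects on the same test, two sessions apart
# (made-up numbers purely for illustration):
session_1 = np.array([12, 18, 25, 9, 30, 22, 15, 27])
session_2 = np.array([14, 17, 24, 11, 29, 20, 16, 26])

# A high correlation between administrations indicates good test-retest
# reliability; a low one means the instrument is unstable over time.
r = np.corrcoef(session_1, session_2)[0, 1]
print(round(r, 3))   # close to 1 here
```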

“Intelligence is a theoretically ill-defined construct. In general, it refers to the ability to think in an abstract manner and solve new problems. Typically, two forms of intelligence are distinguished, crystallized intelligence (academic skills and knowledge that one has acquired during schooling) and fluid intelligence (the ability to solve new problems). Crystallized intelligence is better preserved in patients with brain disease than fluid intelligence (3). […] From a neuropsychological viewpoint, the concept of intelligence as a unitary construct (often referred to as g-factor) does not provide valuable information, since deficits in specific cognitive functions may be averaged out in the total IQ score. Thus, in most neuropsychological studies, intelligence tests are included because of specific subtests that are assumed to measure specific cognitive functions, and the performance profile is analyzed rather than considering the IQ measure as a compound score in isolation.”

“Attention is a concept that in general relates to the selection of relevant information from our environment and the suppression of irrelevant information (selective or “focused” attention), the ability to shift attention between tasks (divided attention), and to maintain a state of alertness to incoming stimuli over longer periods of time (concentration and vigilance). Many different structures in the human brain are involved in attentional processing and, consequently, disorders in attention occur frequently after brain disease or damage (21). […] Speed of information processing is not a localized cognitive function, but depends greatly on the integrity of the cerebral network as a whole, the subcortical white matter and the interhemispheric and intrahemispheric connections. It is one of the cognitive functions that clearly declines with age and it is highly susceptible to brain disease or dysfunction of any kind.”

“The MiniMental State Examination (MMSE) is a screening instrument that has been developed to determine whether older adults have cognitive impairments […] numerous studies have shown that the MMSE has poor sensitivity and specificity, as well as a low-test–retest reliability […] the MMSE has been developed to determine cognitive decline that is typical for Alzheimer’s dementia, but has been found less useful in determining cognitive decline in nondemented patients (44) or in patients with other forms of dementia. This is important since odds ratios for both vascular dementia and Alzheimer’s dementia are increased in diabetes (45). Notwithstanding this increased risk, most patients with diabetes have subtle cognitive deficits (46, 47) that may easily go undetected using gross screening instruments such as the MMSE. For research in diabetes a high sensitivity is thus especially important. […] ceiling effects in test performance often result in a lack of sensitivity. Subtle impairments are easily missed, resulting in a high proportion of false-negative cases […] In general, tests should be cognitively demanding to avoid ceiling effects in patients with mild cognitive dysfunction.[…] sensitive domains such as speed of information processing, (working) memory, attention, and executive function should be examined thoroughly in diabetes patients, whereas other domains such as language, motor function, and perception are less likely to be affected. Intelligence should always be taken into account, and confounding factors such as mood, emotional distress, and coping are crucial for the interpretation of the neuropsychological test results.”
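
Since the passage leans on sensitivity and specificity, a minimal sketch of the two definitions may be useful (the screening counts below are made-up numbers purely for illustration):

```python
def sensitivity(true_pos, false_neg):
    """Proportion of genuinely impaired patients the test detects."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Proportion of unimpaired patients the test correctly clears."""
    return true_neg / (true_neg + false_pos)

# A screening test with ceiling effects misses many mildly impaired
# patients, i.e. produces many false negatives:
print(sensitivity(true_pos=40, false_neg=60))   # 0.4 -- poor sensitivity
print(specificity(true_neg=90, false_pos=10))   # 0.9
```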

“The life-time risk of any dementia has been estimated to be more than 1 in 5 for women and 1 in 6 for men (2). Worldwide, about 24 million people have dementia, with 4.6 million new cases of dementia every year (3). […] Dementia can be caused by various underlying diseases, the most common of which is Alzheimer’s disease (AD) accounting for roughly 70% of cases in the elderly. The second most common cause of dementia is vascular dementia (VaD), accounting for 16% of cases. Other, less common, causes include dementia with Lewy bodies (DLB) and frontotemporal lobar degeneration (FTLD). […] It is estimated that both the incidence and the prevalence [of AD] double with every 5-year increase in age. Other risk factors for AD include female sex and vascular risk factors, such as diabetes, hypercholesterolaemia and hypertension […] In contrast with AD, progression of cognitive deficits [in VaD] is mostly stepwise and with an acute or subacute onset. […] it is clear that cerebrovascular disease is one of the major causes of cognitive decline. Vascular risk factors such as diabetes mellitus and hypertension have been recognized as risk factors for VaD […] Although pure vascular dementia is rare, cerebrovascular pathology is frequently observed on MRI and in pathological studies of patients clinically diagnosed with AD […] Evidence exists that AD and cerebrovascular pathology act synergistically (60).”

“In type 1 diabetes the annual prevalence of severe hypoglycemia (requiring help for recovery) is 30–40% while the annual incidence varies depending on the duration of diabetes. In insulin-treated type 2 diabetes, the frequency is lower but increases with duration of insulin therapy. […] In normal health, blood glucose is maintained within a very narrow range […] The functioning of the brain is optimal within this range; cognitive function rapidly becomes impaired when the blood glucose falls below 3.0 mmol/l (54 mg/dl) (3). Similarly, but much less dramatically, cognitive function deteriorates when the brain is exposed to high glucose concentrations” (I did not know the latter for certain, but I certainly have had my suspicions for a long time).
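
The parenthetical unit conversion uses the standard factor for glucose of roughly 18 mg/dl per mmol/l. A one-line check (my own sketch; the chapter simply states both numbers):

```python
def mmol_per_l_to_mg_per_dl(glucose_mmol_l):
    """Convert blood glucose from mmol/l to mg/dl (glucose: ~18 mg per mmol)."""
    return glucose_mmol_l * 18.0

# The cognitive-impairment threshold cited in the chapter:
print(mmol_per_l_to_mg_per_dl(3.0))   # 54.0 mg/dl
```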

“When exogenous insulin is injected into a non-diabetic adult human, peripheral tissues such as skeletal muscle and adipose tissue rapidly take up glucose, while hepatic glucose output is suppressed. This causes blood glucose to fall and triggers a series of counterregulatory events to counteract the actions of insulin; this prevents a progressive decline in blood glucose and subsequently reverses the hypoglycemia. In people with insulin-treated diabetes, many of the homeostatic mechanisms that regulate blood glucose are either absent or deficient. [If you’re looking for more details on these topics, it should perhaps be noted here that Philip Cryer’s book on these topics is very nice and informative]. […] The initial endocrine response to a fall in blood glucose in non-diabetic humans is the suppression of endogenous insulin secretion. This is followed by the secretion of the principal counterregulatory hormones, glucagon and epinephrine (adrenaline) (5). Cortisol and growth hormone also contribute, but have greater importance in promoting recovery during exposure to prolonged hypoglycemia […] Activation of the peripheral sympathetic nervous system and the adrenal glands provokes the release of a copious quantity of catecholamines, epinephrine, and norepinephrine […] Glucagon is secreted from the alpha cells of the pancreatic islets, apparently in response to localized neuroglycopenia and independent of central neural control. […] The large amounts of catecholamines that are secreted in response to hypoglycemia exert other powerful physiological effects that are unrelated to counterregulation. These include major hemodynamic actions with direct effects on the heart and blood pressure. […] regional blood flow changes occur during hypoglycemia that encourages the transport of substrates to the liver for gluconeogenesis and simultaneously of glucose to the brain. Organs that have no role in the response to acute stress, such as the spleen and kidneys, are temporarily under-perfused. The mobilisation and activation of white blood cells are accompanied by hemorheological effects, promoting increased viscosity, coagulation, and fibrinolysis and may influence endothelial function (6). In normal health these acute physiological changes probably exert no harmful effects, but may acquire pathological significance in people with diabetes of long duration.”

“The more complex and attention-demanding cognitive tasks, and those that require speeded responses are more affected by hypoglycemia than simple tasks or those that do not require any time restraint (3). The overall speed of response of the brain in making decisions is slowed, yet for many tasks, accuracy is preserved at the expense of speed (8, 9). Many aspects of mental performance become impaired when blood glucose falls below 3.0 mmol/l […] Recovery of cognitive function does not occur immediately after the blood glucose returns to normal, but in some cognitive domains may be delayed for 60 min or more (3), which is of practical importance to the performance of tasks that require complex cognitive functions, such as driving. […] [the] major changes that occur during hypoglycemia – counterregulatory hormone secretion, symptom generation, and cognitive dysfunction – occur as components of a hierarchy of responses, each being triggered as the blood glucose falls to its glycemic threshold. […] In nondiabetic individuals, the glycemic thresholds are fixed and reproducible (10), but in people with diabetes, these thresholds are dynamic and plastic, and can be modified by external factors such as glycemic control or exposure to preceding (antecedent) hypoglycemia (11). Changes in the glycemic thresholds for the responses to hypoglycemia underlie the effects of the acquired hypoglycemia syndromes that can develop in people with insulin-treated diabetes […] the incidence of severe hypoglycemia in people with insulin-treated type 2 diabetes increases steadily with duration of insulin therapy […], as pancreatic beta-cell failure develops. The under-recognized risk of severe hypoglycemia in insulin-treated type 2 diabetes is of great practical importance as this group is numerically much larger than people with type 1 diabetes and encompasses many older, and some very elderly, people who may be exposed to much greater danger because they often have co-morbidities such as macrovascular disease, osteoporosis, and general frailty.”

“Hypoglycemia occurs when a mismatch develops between the plasma concentrations of glucose and insulin, particularly when the latter is inappropriately high, which is common during the night. Hypoglycemia can result when too much insulin is injected relative to oral intake of carbohydrate or when a meal is missed or delayed after insulin has been administered. Strenuous exercise can precipitate hypoglycemia through accelerated absorption of insulin and depletion of muscle glycogen stores. Alcohol enhances the risk of prolonged hypoglycemia by inhibiting hepatic gluconeogenesis, but the hypoglycemia may be delayed for several hours. Errors of dosage or timing of insulin administration are common, and there are few conditions where the efficacy of the treatment can be influenced by so many extraneous factors. The time–action profiles of different insulins can be modified by factors such as the ambient temperature or the site and depth of injection and the person with diabetes has to constantly try to balance insulin requirement with diet and exercise. It is therefore not surprising that hypoglycemia occurs so frequently. […] The lower the median blood glucose during the day, the greater the frequency of symptomatic and biochemical hypoglycemia […] Strict glycemic control can […] induce the acquired hypoglycemia syndromes, impaired awareness of hypoglycemia (a major risk factor for severe hypoglycemia), and counterregulatory hormonal deficiencies (which interfere with blood glucose recovery). […] Severe hypoglycemia is more common at the extremes of age – in very young children and in elderly people. […] In type 1 diabetes the frequency of severe hypoglycemia increases with duration of diabetes (12), while in type 2 diabetes it is associated with increasing duration of insulin treatment (18). […] Around one quarter of all episodes of severe hypoglycemia result in coma […] In 10% of episodes of severe hypoglycemia affecting people with type 1 diabetes and around 30% of those in people with insulin-treated type 2 diabetes, the assistance of the emergency medical services is required (23). However, most episodes (both mild and severe) are treated in the community, and few people require admission to hospital.”

“Severe hypoglycemia is potentially dangerous and has a significant mortality and morbidity, particularly in older people with insulin-treated diabetes who often have premature macrovascular disease. The hemodynamic effects of autonomic stimulation may provoke acute vascular events such as myocardial ischemia and infarction, cardiac failure, cerebral ischemia, and stroke (6). In clinical practice the cardiovascular and cerebrovascular consequences of hypoglycemia are frequently overlooked because the role of hypoglycemia in precipitating the vascular event is missed. […] The profuse secretion of catecholamines in response to hypoglycemia provokes a fall in plasma potassium and causes electrocardiographic (ECG) changes, which in some individuals may provoke a cardiac arrhythmia […]. A possible mechanism that has been observed with ECG recordings during hypoglycemia is prolongation of the QT interval […]. Hypoglycemia-induced arrhythmias during sleep have been implicated as the cause of the “dead in bed” syndrome that is recognized in young people with type 1 diabetes (40). […] Total cerebral blood flow is increased during acute hypoglycemia while regional blood flow within the brain is altered acutely. Blood flow increases in the frontal cortex, presumably as a protective compensatory mechanism to enhance the supply of available glucose to the most vulnerable part of the brain. These regional vascular changes become permanent in people who are exposed to recurrent severe hypoglycemia and in those with impaired awareness of hypoglycemia, and are then present during normoglycemia (41). This probably represents an adaptive response of the brain to recurrent exposure to neuroglycopenia. However, these permanent hypoglycemia-induced changes in regional cerebral blood flow may encourage localized neuronal ischemia, particularly if the cerebral circulation is already compromised by the development of cerebrovascular disease associated with diabetes. […] Hypoglycemia-induced EEG changes can persist for days or become permanent, particularly after recurrent severe hypoglycemia”.

“In the large British Diabetic Association Cohort Study of people who had developed type 1 diabetes before the age of 30, acute metabolic complications of diabetes were the greatest single cause of excess death under the age of 30; hypoglycemia was the cause of death in 18% of males and 6% of females in the 20–49 age group (47).”

“[The] syndromes of counterregulatory hormonal deficiencies and impaired awareness of hypoglycemia (IAH) develop over a period of years and ultimately affect a substantial proportion of people with type 1 diabetes and a lesser number with insulin-treated type 2 diabetes. They are considered to be components of hypoglycemia-associated autonomic failure (HAAF), through down-regulation of the central mechanisms within the brain that would normally activate glucoregulatory responses to hypoglycemia, including the release of counterregulatory hormones and the generation of warning symptoms (48). […] The glucagon secretory response to hypoglycemia becomes diminished or absent within a few years of the onset of insulin-deficient diabetes. With glucagon deficiency alone, blood glucose recovery from hypoglycemia is not noticeably affected because the secretion of epinephrine maintains counterregulation. However, almost half of those who have type 1 diabetes of 20 years duration have evidence of impairment of both glucagon and epinephrine in response to hypoglycemia (49); this seriously delays blood glucose recovery and allows progression to more severe and prolonged hypoglycemia when exposed to low blood glucose. People with type 1 diabetes who have these combined counterregulatory hormonal deficiencies have a 25-fold higher risk of experiencing severe hypoglycemia if they are subjected to intensive insulin therapy compared with those who have lost their glucagon response but have retained epinephrine secretion […] Impaired awareness is not an “all or none” phenomenon. “Partial” impairment of awareness may develop, with the individual being aware of some episodes of hypoglycemia but not others (53). Alternatively, the intensity or number of symptoms may be reduced, and neuroglycopenic symptoms predominate. […] total absence of any symptoms, albeit subtle, is very uncommon […] IAH affects 20–25% of patients with type 1 diabetes (11, 55) and less than 10% with type 2 diabetes (24), becomes more prevalent with increasing duration of diabetes (12) […], and predisposes the patient to a sixfold higher risk of severe hypoglycemia than people who retain normal awareness (56). When IAH is associated with strict glycemic control during intensive insulin therapy or has followed episodes of recurrent severe hypoglycemia, it may be reversible by relaxing glycemic control or by avoiding further hypoglycemia (11), but in many patients with type 1 diabetes of long duration, it appears to be a permanent defect. […] The modern management of diabetes strives to achieve strict glycemic control using intensive therapy to avoid or minimize the long-term complications of diabetes; this strategy tends to increase the risk of hypoglycemia and promotes development of the acquired hypoglycemia syndromes.”

February 5, 2017 Posted by | books, diabetes, medicine, Neurology

Books 2016

Below I have posted a list of the 156 books I read to completion in 2016, as well as links to blog posts covering the books and to the reviews of them I’ve written on goodreads. At the bottom of the post I have also added the 7 books I did not finish this year, along with some related links and comments. This is unlikely to be the final edition of the post, as I’ll continue to add links and comments in 2017 if/when I blog about or review books mentioned below.

As I also mentioned earlier in the year, I have been reading a lot of fiction this year and not enough non-fiction. Regarding the ‘technical aspects’ of the list below, as usual the letters ‘f’ and ‘nf.’ in the parentheses correspond to ‘fiction’ and ‘non-fiction’, respectively, whereas the ‘m’ category covers ‘miscellaneous’ books. The numbers in the parentheses correspond to the goodreads ratings I thought the books deserved.

I did a brief count of the books on the list and concluded that the list includes 30 books categorized as non-fiction, 20 books in the miscellaneous category, and 106 books categorized as fiction. As usual non-fiction works published by Springer make up a substantial proportion of the non-fiction books I read (20 %), with another 20 % accounted for by Oxford University Press, Princeton University Press and Wiley/Wiley-Blackwell. Some of the authors in the fiction category have also featured on the lists previously (Christie, Wodehouse, Bryson), but other names are new – new names include: Dick Francis (39 books), Tom Sharpe (16 books), David Sedaris (7 books), Mario Puzo (4 books), Gerald Durrell (3 books), and Connie Willis (3 books).

I shared my ‘year in books’ on goodreads, and that link includes a few summary stats as well as cover images of the books (annoyingly a large-ish proportion of the non-fiction books have not added cover pictures, but it’s even so a neat visualization tool). With 156 books finished this year I read almost exactly 3 books per week on average, and the goodreads tools also tell me that I read 47,281 pages during the year. As I don’t believe goodreads includes the page counts of partially read books in that tool, this is probably a slight underestimate but it’s in that neighbourhood anyway; this corresponds to ~130 pages per day on average (129.5) throughout the year, or roughly 900 pages per week. The average length of the books I finished was 309 pages, again according to goodreads.
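
For what it's worth, the arithmetic in the paragraph above is easy to re-derive (a quick sketch using only the figures already given):

```python
pages_total = 47281
books_total = 156

print(round(pages_total / 365, 1))        # ~129.5 pages/day
print(round(pages_total / 52))            # ~909 pages/week
print(round(pages_total / books_total))   # ~303 pages/book, near the 309
                                          # average goodreads reports
```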

Since I started blogging, I have published roughly 500 posts about books I’ve read – I actually realized while writing this post that the next post I publish on this site categorized under ‘books’ will be post number 500 in that category. As should be obvious from the list below, as a rule I do not cover fiction books on this blog, aside from in the context of quote posts where I may occasionally include a few quotes from books I’ve read (I decided early on not to include links to such posts on lists like these, as that would be too much work). In the context of quotes I should probably add, for readers not already aware of this, that I recently decided to move/copy a large number of quotes from this site to goodreads, and that I now update my goodreads quote collection more frequently than I do the quote collection on this blog; at this point, my quote collection on goodreads includes 1347 quotes. Incidentally, for a few more details about this aspect of the goodreads site, see this post.

Both Dick Francis and Connie Willis were introduced to me by the SSC commentariat, and this link includes a lot of other author recommendations which might be of interest to you. I should perhaps also note before moving on to the list that I have recently added a not-insignificant number of books to my list of favourite books on goodreads. I have (retrospectively) slightly modified my implicit selection criteria for adding books to the list; previously, if a book had taught me a lot but I did not give it a five-star rating, or I figured it wasn't at least very close to perfect, it wasn't going to get anywhere near my list of favourite books. I recently figured that perhaps I should also include books which had taught me a lot, or changed my way of looking at the world, even if they were not very close to perfect in most respects. I'm still not quite sure what the best categorization approach is, but as of now the list includes some books which did not feature on it in the recent past. I mention the list explicitly here because people perusing a list like the one below are presumably in part looking for good books to read, and my inclusion of a book on that list can still be taken as at least a qualified recommendation of the book.

1. 4.50 from Paddington (4, f). Agatha Christie.

2. Explaining Behavior: Reasons in a World of Causes (1, nf. Bradford Book). Goodreads review here.

3. Hickory Dickory Dock (3, f). Agatha Christie.

4. Death Comes As the End (3, f). Agatha Christie. Short goodreads review here.

5. At Bertram’s Hotel (3, f). Agatha Christie. Very short goodreads review here.

6. A Caribbean Mystery (3, f). Agatha Christie.

7. A Rulebook for Arguments (Hackett Student Handbooks) (1, nf. Hackett Publishing). Very short goodreads review here.

8. The Clocks (2, f). Agatha Christie.

9. Third Girl (2, f). Agatha Christie. Very short goodreads review here.

10. The Misanthrope (2, f). Molière. Very short goodreads review here.

11. The Secret Adversary (2, f). Agatha Christie. Short goodreads review here.

12. The Social Psychology of Nonverbal Communication (2, nf. Palgrave Macmillan). Goodreads review here. Blog coverage here.

13. N or M? (2, f). Agatha Christie. Goodreads review – with spoilers – here.

14. The Emergence of Norms (4, nf. Oxford University Press). Goodreads review here. Blog coverage here.

15. By the Pricking of My Thumbs (2, f). Agatha Christie.

16. The Godfather (4, f). Mario Puzo.

17. Partners in Crime (1, f). Agatha Christie. Short goodreads review here.

18. Elephants can Remember (1, f). Agatha Christie. Short goodreads review here.

19. Hallowe’en Party (1, f). Agatha Christie. Short goodreads review here.

20. French Leave (4, f). P. G. Wodehouse. Short goodreads review here.

21. A Few Quick Ones (3, f). P. G. Wodehouse.

22. Ice in the Bedroom (4, f). P. G. Wodehouse.

23. Over Seventy (4, f). P. G. Wodehouse. Short goodreads review here.

24. The Secret of Chimneys (2, f). Agatha Christie.

25. World Regions in Global Context (1, nf. Prentice Hall). Very long (600+ pages of content). Goodreads review here.

26. Something Fishy (3, f). P. G. Wodehouse.

27. Do Butlers Burgle Banks? (3, f). P. G. Wodehouse.

28. The Mirror Crack’d from Side to Side (1, f). Agatha Christie. Boring story, almost didn’t finish it.

29. Frozen Assets (4, f). P. G. Wodehouse.

30. A Cooperative Species: Human Reciprocity and Its Evolution (5, nf. Princeton University Press). Goodreads review here. Blog coverage here.

31. If I Were You (4, f). P. G. Wodehouse.

32. On the Shortness of Life (nf.). Seneca the Younger.

33. Barmy in Wonderland (3, f). P. G. Wodehouse.

34. A Gentleman of Leisure (3, f). P. G. Wodehouse. Very short goodreads review here.

35. Pearls, Girls and Monty Bodkin (5, f). P. G. Wodehouse. Short goodreads review here.

36. The Ultimate Quotable Einstein (3, nf. Princeton University Press). Blog coverage here.

37. The Luck Stone (2, f). P. G. Wodehouse. Goodreads review here.

38. Company for Henry (4, f). P. G. Wodehouse.

39. Bachelors Anonymous (5, f). P. G. Wodehouse. A short book, but very funny.

40. The Second World War (5, nf.). Winston Churchill. Very long – the book is a thousand-page abridgement of 6 different volumes written by Churchill. Blog coverage here, here, here, and here. I added this book to my list of favourite books on goodreads.

41. The Old Reliable (3, f). P. G. Wodehouse.

42. Performing Flea (4, m). P. G. Wodehouse, William Townend.

43. Decline and Fall (3, f). Evelyn Waugh. Goodreads review here.

44. The Devil’s Garden (2, f). W. B. Maxwell. Goodreads review here.

45. The Road to Little Dribbling: Adventures of an American in Britain (3, m). Bill Bryson.

46. Bryson’s Dictionary of Troublesome Words: A Writer’s Guide to Getting It Right (3, nf.). Bill Bryson. Goodreads review here.

47. The Life and Times of the Thunderbolt Kid (3, m). Bill Bryson. Goodreads review here.

48. Shakespeare: The World as Stage (2, m). Bill Bryson.

49. One Summer: America, 1927 (2, m). Bill Bryson. Goodreads review here.

50. The Sicilian (3, f). Mario Puzo.

51. Fools Die (3, f). Mario Puzo. Short goodreads review here.

52. Not George Washington (2, f). P. G. Wodehouse. Short goodreads review here.

53. Pre-Industrial Societies: Anatomy of the Pre-Modern World (5, nf. Oneworld Publications). Goodreads review here. I added this book to my list of favourite books on goodreads.

54. The Last Don (4, f). Mario Puzo. Goodreads review here.

55. Fear and Loathing in Las Vegas (f). Hunter S. Thompson. Goodreads review here.

56. Aunts Aren’t Gentlemen (3, f). P. G. Wodehouse.

57. What If?: Serious Scientific Answers to Absurd Hypothetical Questions (2, m). Randall Munroe. Short goodreads review here.

58. Wilt (5, f). Tom Sharpe. Goodreads review here.

59. The Wilt Alternative (4, f). Tom Sharpe. Very short goodreads review here.

60. Wilt On High (4, f). Tom Sharpe. Short goodreads review here.

61. Wilt In Nowhere (3, f). Tom Sharpe.

62. The Wilt Inheritance (3, f). Tom Sharpe. Goodreads review here.

63. Monstrous Regiment (3, f). Terry Pratchett.

64. Porterhouse Blue (3, f). Tom Sharpe. Goodreads review here.

65. The Midden (4, f). Tom Sharpe. Goodreads review here.

66. Human Drug Metabolism: An Introduction (5, nf. Wiley). Goodreads review here. Blog coverage here, here, and here.

67. Vintage Stuff (2, f). Tom Sharpe.

68. Blackadder: The Whole Damn Dynasty, 1485-1917 (5, f). Richard Curtis, Ben Elton, Rowan Atkinson & Jon Lloyd. Goodreads review here.

69. How the Endocrine System Works (2, nf. Wiley-Blackwell). Goodreads review here.

70. Suicide Prevention and New Technologies: Evidence Based Practice (1, nf. Palgrave Macmillan). Long(-ish) goodreads review here.

71. Blott on the Landscape (3, f). Tom Sharpe. Short goodreads review here.

72. Diabetes and the Metabolic Syndrome in Mental Health (2, nf. Lippincott Williams & Wilkins). Goodreads review here. Blog coverage here and here.

73. Palliative Care and End-of-Life Decisions (1, nf. Palgrave Pivot). Goodreads review here.

74. Ancestral Vices (4, f). Tom Sharpe. Goodreads review here.

75. Respirology (2, nf. Springer). Goodreads review here. Blog coverage here.

76. The Throwback (4, f). Tom Sharpe. Goodreads review here.

77. The Great Pursuit (3, f). Tom Sharpe.

78. Riotous Assembly (4, f). Tom Sharpe.

79. Indecent Exposure (3, f). Tom Sharpe.

80. Grantchester Grind (3, f). Tom Sharpe. Goodreads review here.

81. Time’s Arrow (4, f). Martin Amis. Short goodreads review here.

82. One Day in the Life of Ivan Denisovich (4, f). Aleksandr Solzhenitsyn. Short goodreads review here.

83. The Gropes (3, f). Tom Sharpe. Short goodreads review here.

84. The Old Devils (3, f). Kingsley Amis. Goodreads review here.

85. Me Talk Pretty One Day (5, m). David Sedaris. Goodreads review here.

86. Deserts: A Very Short Introduction (3, nf. Oxford University Press). Goodreads review here. Blog coverage here.

87. Naked (3, m). David Sedaris.

88. Holidays on Ice (2, m). David Sedaris. Short goodreads review here.

89. Dress Your Family in Corduroy and Denim (4, m). David Sedaris. Short goodreads review here.

90. Squirrel Seeks Chipmunk: A Modest Bestiary (3, m). David Sedaris. Short goodreads review here.

91. When You Are Engulfed in Flames (3, m). David Sedaris.

92. Barrel Fever (2, m). David Sedaris. Short goodreads review here.

93. Poor Richard’s Almanack (m). Benjamin Franklin.

94. Role of Biomarkers in Medicine (2, nf. InTech). Goodreads review here. Blog coverage here.

95. My Family and Other Animals (4, m). Gerald Durrell. Goodreads review here.

96. Birds, Beasts and Relatives (4, m). Gerald Durrell. Goodreads review here.

97. The Garden of the Gods (3, m). Gerald Durrell.

98. The Gun Seller (4, f). Hugh Laurie. Goodreads review here.

99. The Diary of a Nobody (1, m). George Grossmith. Goodreads review here.

100. The Thirteen Problems (2, f). Agatha Christie.

101. Dead Cert (4, f). Dick Francis.

102. Nerve (3, f). Dick Francis.

103. For Kicks (3, f). Dick Francis.

104. Odds Against (3, f). Dick Francis.

105. Flying Finish (2, f). Dick Francis. Short goodreads review here.

106. The Salmon of Doubt (4, m). Douglas Adams. Short goodreads review here.

107. Enquiry (3, f). Dick Francis. Very short goodreads review here.

108. Blood Sport (3, f). Dick Francis.

109. The Wisdom of Life and Counsels and Maxims (m). Arthur Schopenhauer. Goodreads review here.

110. Forfeit (2, f). Dick Francis.

111. Bonecrack (2, f). Dick Francis. Short goodreads review here.

112. Rat Race (4, f). Dick Francis. Goodreads review here.

113. Smokescreen (4, f). Dick Francis. A very short goodreads review here.

114. The Biology of Moral Systems (5, nf. Aldine Transaction). Goodreads review here. Blog coverage here. I added this book to my list of favourite books on goodreads.

115. Slay Ride (4, f). Dick Francis. Short goodreads review here.

116. Water Supply in Emergency Situations (2, nf. Springer). Goodreads review here. Blog coverage here and here.

117. High Stakes (4, f). Dick Francis.

118. In the Frame (3, f). Dick Francis.

119. Knockdown (3, f). Dick Francis.

120. Trial Run (2, f). Dick Francis. Short goodreads review here.

121. Managing Diabetic Nephropathies in Clinical Practice (4, nf. Springer). Very short goodreads review here. Blog coverage here.

122. Whip Hand (4, f). Dick Francis. Goodreads review here.

123. Risk (2, f). Dick Francis. Goodreads review here.

124. Reflex (3, f). Dick Francis. My long-ish goodreads review includes major spoilers.

125. The Ageing Immune System and Health (3, nf. Springer). Blog coverage here and here.

126. Twice Shy (2, f). Dick Francis. Goodreads review here. (I do discuss a few of the things that happen in the book in my review, but I don’t think it actually contains any spoilers).

127. The Danger (4, f). Dick Francis. In my goodreads review I noted that “this book is one of the best novels by Francis I’ve read.”

128. Banker (2, f). Dick Francis. Short goodreads review here.

129. Proof (2, f). Dick Francis.

130. Break In (3, f). Dick Francis.

131. Integrated Diabetes Care: A Multidisciplinary Approach (4, nf. Springer). Goodreads review here. Blog coverage here and here.

132. Bolt (4, f). Dick Francis. Very short goodreads review here.

133. The Edge (5, f). Dick Francis. Short goodreads review here.

134. Hot Money (2, f). Dick Francis. Goodreads review here.

135. Straight (3, f). Dick Francis. Goodreads review here.

136. Longshot (4, f). Dick Francis.

137. The Complete Yes Prime Minister (5, f). Jonathan Lynn and Antony Jay. Goodreads review here.

138. Neuroplasticity (4, nf. MIT Press). Goodreads review here.

139. Comeback (4, f). Dick Francis. My goodreads review includes major spoilers.

140. Driving Force (3, f). Dick Francis. Goodreads review here.

141. Decider (3, f). Dick Francis.

142. Essential Microbiology and Hygiene for Food Professionals (2, nf. CRC Press). Short goodreads review here.

143. Wild Horses (2, f). Dick Francis.

144. Come to Grief (4, f). Dick Francis.

145. To the Hilt (2, f). Dick Francis.

146. 10 lb Penalty (2, f). Dick Francis. Short goodreads review here.

147. Second Wind (2, f). Dick Francis.

148. Shattered (2, f). Dick Francis. Goodreads review here.

149. Under Orders (4, f). Dick Francis.

150. To Say Nothing of the Dog (5, f). Connie Willis. Goodreads review here. I added this book to my list of favourite books on goodreads.

151. Diabetes and the Brain (5, nf. Humana Press). Goodreads review here. Blog coverage here, here, and here. I added this book to my list of favourite books on goodreads.

152. Doomsday Book (5, f). Connie Willis. Goodreads review here. I added this book to my list of favourite books on goodreads.

153. ABC of HIV and AIDS (2, nf. BMJ Publishing Group). Goodreads review here.

154. 100 Cases in Psychiatry (2, nf. CRC Press). Goodreads review here.

155. Fire Watch (2, f). Connie Willis. Goodreads review here.

156. Social Behaviour in Animals (3, nf. Springer). Goodreads review here.

Books I did not finish:

The Adventures of Huckleberry Finn (1, f). Mark Twain. Goodreads review here.

Lucky Jim (1, f). Kingsley Amis. Goodreads review here.

Raising Steam (?, f). Terry Pratchett. These days I mostly use Pratchett’s books as a treat; the few remaining books in the Discworld series which I have yet to read are books I feel I have to earn the right to read. I started reading this book because I felt terrible at the time, but after a hundred pages or so I decided that I had not in fact deserved to read it, and so I put it away again. Unlike the two books above I do not consider this book to be bad; that’s not why I didn’t finish it.

Anna Karenina (?, f). Tolstoy. As I pointed out in my short review, “so far (I stopped around page 140) it’s been a story about miserable Russians, and I can’t read that kind of stuff right now.” Again, I would not say this book is bad, but I could not read that kind of stuff at the time.

The Language Instinct: How the Mind Creates Language (?, nf. Harper Perennial Modern Classics). Pinker’s book may be one of the last popular science books I’ll read, at least for a while – I find that I simply can’t read this kind of book anymore (which is annoying, because I also bought Jonathan Haidt’s The Righteous Mind this year, and I worry that I’ll never be able to read that book, despite the content being at least somewhat interesting, simply on account of the way the book is likely to be written). As I noted while reading the book, “I’ve realized by now that I’ve probably at this point grown to strongly dislike reading popular science books. I’ve disliked other PS books I’ve read in the semi-near past as well, but I always figured I had specific reasons for disliking a particular book. At this point it seems like it’s a general thing. I don’t like these books any more. Too imprecise language, claims are consistently way too strong, etc., etc.” My reading experience of Pinker’s book was definitely not improved by the fact that I have read textbooks on topics closely related to those covered in the book in the past (Eysenck and Keane, Snowling et al.).

Physiology at a Glance (?, Wiley-Blackwell). ‘Too much work, considering the pay-off’ would probably be the short version of why I didn’t finish this one – but this should not be taken as an indication that the book is bad. Despite the words ‘at a glance’ in the title, each short chapter (2 pages) in this book roughly matches the amount of material usually covered in an academic lecture (this is the general structure of the ‘at a glance’ books), which means that the book takes quite a bit more work than the limited page count might indicate. The fact that I knew many of the things covered didn’t mean that the book was much faster to read than it otherwise might have been; it still took a lot of time and effort to digest the material. I’m sure there’s some stuff in the book which I don’t know and stuff I’ve forgotten, and I did learn some new stuff from the chapters I did read, so I’m conflicted about whether or not to pick it up again later – it may be worth it at some point. However, back when I was reading it I decided in the end to just put the book away and read something else instead. If you’re looking for a dense and to-the-point introduction to physiology/anatomy, I’m sure you could do a lot worse than this book.

100 Endgames You Must Know: Vital Lessons for Every Chess Player (?, nf. New in Chess). If I just wanted to be able to say that I had ‘read’ this book, I would have finished it a long time ago, but this is not the sort of book you just ‘read’. The positions covered need to be studied and analyzed in detail, played out, and perhaps reviewed (depending on how ambitious you are about your chess). I’m more than half-way through (p. 140 or so), but I rarely feel like working on this stuff as it’s more fun to play chess than to systematically improve your chess in the manner you will if you work through the material covered in this book. It’s a great endgame book, but it takes a lot of work.

January 1, 2017 Posted by | books, personal

Diabetes and the Brain (II)

Here’s my first post about the book, which I recently finished – here’s my goodreads review. I added the book to my list of favourite books on goodreads; it’s a great textbook. Below are some observations from the first few chapters of the book.

“Several studies report T1D [type 1 diabetes] incidence numbers of 0.1–36.8/100,000 subjects worldwide (2). Above the age of 15 years ketoacidosis at presentation occurs on average in 10% of the population; in children ketoacidosis at presentation is more frequent (3, 4). Overall, publications report a male predominance (1.8 male/female ratio) and a seasonal pattern with higher incidence in November through March in European countries. Worldwide, the incidence of T1D is higher in more developed countries […] After asthma, T1D is a leading cause of chronic disease in children. […]  twin studies show a low concordant prevalence of T1D of only 30–55%. […] Diabetes mellitus type 1 may be sporadic or associated with other autoimmune diseases […] The latter has been classified as autoimmune polyglandular syndrome type II (APS-II). APS-II is a polygenic disorder with a female preponderance which typically occurs between the ages of 20 and 40 years […] In clinical practice, anti-thyroxine peroxidase (TPO) positive hypothyroidism is the most frequent concomitant autoimmune disease in type 1 diabetic patients, therefore all type 1 diabetic patients should annually be screened for the presence of anti-TPO antibodies. Other frequently associated disorders are atrophic gastritis leading to vitamin B12 deficiency (pernicious anemia) and vitiligo. […] The normal human pancreas contains a superfluous amount of β-cells. In T1D, β-cell destruction therefore remains asymptomatic until a critical β-cell reserve is left. This destructive process takes months to years […] Only in a minority of type 1 diabetic patients does the disease begin with diabetic ketoacidosis, the majority presents with a milder course that may be mistaken as type 2 diabetes (7).”

“Insulin is the main regulator of glucose metabolism by stimulating glucose uptake in tissues and glycogen storage in liver and muscle and by inhibiting gluconeogenesis in the liver (11). Moreover, insulin is a growth factor for cells and cell differentiation, and acting as anabolic hormone insulin stimulates lipogenesis and protein synthesis. Glucagon is the counterpart of insulin and is secreted by the α-cells in the pancreatic islets in an inversely proportional quantity to the insulin concentration. Glucagon, being a catabolic hormone, stimulates glycolysis and gluconeogenesis in the liver as well as lipolysis and uptake of amino acids in the liver. Epinephrine and norepinephrine have comparable catabolic effects […] T1D patients lose the glucagon response to hypoglycemia after several years, when all β-cells are destructed […] The risk of hypoglycemia increases with improved glycemic control, autonomic neuropathy, longer duration of diabetes, and the presence of long-term complications (17) […] Long-term complications are prevalent in any population of type 1 diabetic patients with increasing prevalence and severity in relation to disease duration […] The pathogenesis of diabetic complications is multifactorial, complicated, and not yet fully elucidated.”

“Cataract is much more frequent in patients with diabetes and tends to become clinically significant at a younger age. Glaucoma is markedly increased in diabetes too.” (I was unaware of this).

“T1D should be considered as an independent risk factor for atherosclerosis […] An older study shows that the cumulative mortality of coronary heart disease in T1D was 35% by the age 55 (34). In comparison, the Framingham Heart Study showed a cardiovascular mortality of 8% of men and 4% of women without diabetes, respectively. […] Atherosclerosis is basically a systemic disease. Patients with one clinically apparent localization are at risk for other manifestations. […] Musculoskeletal disease in diabetes is best viewed as a systemic disorder with involvement of connective tissue. Potential pathophysiological mechanisms that play a role are glycosylation of collagen, abnormal cross-linking of collagen, and increased collagen hydration […] Dupuytren’s disease […] may be observed in up to 42% of adults with diabetes mellitus, typically in patients with long-standing T1D. Dupuytren’s is characterized by thickening of the palmar fascia due to fibrosis with nodule formation and contracture, leading to flexion contractures of the digits, most commonly affecting the fourth and fifth digits. […] Foot problems in diabetes are common and comprise ulceration, infection, and gangrene […] The lifetime risk of a foot ulcer for diabetic patients is about 15% (42). […] Wound depth is an important determinant of outcome (46, 47). Deep ulcers with cellulitis or abscess formation often involve osteomyelitis. […] Radiologic changes occur late in the course of osteomyelitis and negative radiographs certainly do not exclude it.”

“Education of people with diabetes is a comprehensive task and involves teamwork by a team that comprises at least a nurse educator, a dietician, and a physician. It is, however, essential that individuals with diabetes assume an active role in their care themselves, since appropriate self-care behavior is the cornerstone of the treatment of diabetes.” (for much more on these topics, see Simmons et al.)

“The International Diabetes Federation estimates that more than 245 million people around the world have diabetes (4). This total is expected to rise to 380 million within 20 years. Each year a further 7 million people develop diabetes. Diabetes, mostly type 2 diabetes (T2D), now affects 5.9% of the world’s adult population with almost 80% of the total in developing countries. […] According to […] 2007 prevalence data […] [a]lmost 25% of the population aged 60 years and older had diabetes in 2007. […] It has been projected that one in three Americans born in 2000 will develop diabetes, with the highest estimated lifetime risk among Latinos (males, 45.4% and females, 52.5%) (6). A rise in obesity rates is to blame for much of the increase in T2D (7). Nearly two-thirds of American adults are overweight or obese (8).” [my bold, US]

“In the natural history of progression to diabetes, β-cells initially increase insulin secretion in response to insulin resistance and, for a period of time, are able to effectively maintain glucose levels below the diabetic range. However, when β-cell function begins to decline, insulin production is inadequate to overcome the insulin resistance, and blood glucose levels rise. […] Insulin resistance, once established, remains relatively stable over time. […] progression of T2D is a result of worsening β-cell function with pre-existing insulin resistance.”

“Lifestyle modification (i.e., weight loss through diet and increased physical activity) has proven effective in reducing incident T2D in high-risk groups. The Da Qing Study (China) randomly allocated 33 clinics (557 persons with IGT) to 1 of 4 study conditions: control, diet, exercise, or diet plus exercise (23). Compared with the control group, the incidence of diabetes was reduced in the three intervention groups by 31, 46, and 42%, respectively […] The Finnish Diabetes Prevention Study evaluated 522 obese persons with IGT randomly allocated on an individual basis to a control group or a lifestyle intervention group […] During the trial, the incidence of diabetes was reduced by 58% in the lifestyle group compared with the control group. The US Diabetes Prevention Program is the largest trial of primary prevention of diabetes to date and was conducted at 27 clinical centers with 3,234 overweight and obese participants with IGT randomly allocated to 1 of 3 study conditions: control, use of metformin, or intensive lifestyle intervention […] Over 3 years, the incidence of diabetes was reduced by 31% in the metformin group and by 58% in the lifestyle group; the latter value is identical to that observed in the Finnish Study. […] Metformin is recommended as first choice for pharmacologic treatment [of type 2 diabetes] and has good efficacy to lower HbA1c […] However, most patients will eventually require treatment with combinations of oral medications with different mechanisms of action simultaneously in order to attain adequate glycemic control.”
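
The percentages reported in these trials are relative risk reductions, i.e. one minus the ratio of the cumulative incidences in the intervention and control arms. Here is a minimal sketch of the computation (in Python; the incidence figures are made up for illustration, and only the resulting percentages are chosen to mirror the Da Qing numbers quoted above):

# Relative risk reduction (RRR) from cumulative incidences.
# The incidences below are hypothetical; only the resulting
# reductions are chosen to match the Da Qing figures quoted above.

def relative_risk_reduction(incidence_control, incidence_intervention):
    # RRR = 1 - (incidence in intervention arm / incidence in control arm)
    return 1 - incidence_intervention / incidence_control

control = 0.60  # hypothetical cumulative incidence in the control arm
arms = {"diet": 0.414, "exercise": 0.324, "diet plus exercise": 0.348}

for name, incidence in arms.items():
    print(f"{name}: {relative_risk_reduction(control, incidence):.0%} reduction")
# diet: 31% reduction
# exercise: 46% reduction
# diet plus exercise: 42% reduction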

“CVD [cardiovascular disease, US] is the cause of 65% of deaths in patients with T2D (31). Epidemiologic studies have shown that the risk of a myocardial infarction (MI) or CVD death in a diabetic individual with no prior history of CVD is comparable to that of an individual who has had a previous MI (32, 33). […] Stroke is the second leading cause of long-term disability in high-income countries and the second leading cause of death worldwide. […] Stroke incidence is highly age-dependent. The median stroke incidence in persons between 15 and 49 years of age is 10 per 100,000 per year, whereas this is 2,000 per 100,000 for persons aged 85 years or older. […] In Western communities, about 80% of strokes are caused by focal cerebral ischemia, secondary to arterial occlusion, 15% by intracerebral hemorrhage, and 5% by subarachnoid hemorrhage (2). […] Patients with ischemic stroke usually present with focal neurological deficit of sudden onset. […] Common deficits include dysphasia, dysarthria, hemianopia, weakness, ataxia, sensory loss, and cognitive disorders such as spatial neglect […] Mild-to-moderate headache is an accompanying symptom in about a quarter of all patients with ischemic stroke […] The risk of symptomatic intracranial hemorrhage after thrombolysis is higher with more severe strokes and higher age (21). [worth keeping in mind when in the ‘I-am-angry-and-need-someone-to-blame-for-the-death-of-individual-X-phase’ – if the individual died as a result of the treatment, the prognosis was probably never very good to start with..] […] Thirty-day case fatality rates for ischemic stroke in Western communities generally range between 10 and 17% (2). Stroke outcome strongly depends not only on age and comorbidity, but also on the type and cause of the infarct. Early case fatality can be as low as 2.5% in patients with lacunar infarcts (7) and as high as 78% in patients with space-occupying hemispheric infarction (8).”

“In the previous 20 years, tens of thousands of patients with acute ischemic stroke have participated in hundreds of clinical trials of putative neuroprotective therapies. Despite this enormous effort, there is no evidence of benefit of a single neuroprotective agent in humans, whereas over 500 have been effective in animal models […] the failure of neuroprotective agents in the clinic may […] be explained by the fact that most neuroprotectants inhibit only a single step in the broad cascade of events that lead to cell death (9). Currently, there is no rationale for the use of any neuroprotective medication in patients with acute ischemic stroke.”

“Between 5 and 10% of patients with ischemic stroke suffer from epileptic seizures in the first week and about 3% within the first 24 h […] Post-stroke seizures are not associated with a higher mortality […] About 1 out of every 11 patients with an early epileptic seizure develops epilepsy within 10 years after stroke onset (51) […] In the first 12 h after stroke onset, plasma glucose concentrations are elevated in up to 68% of patients, of whom more than half are not known to have diabetes mellitus (53). An initially high blood glucose concentration in patients with acute stroke is a predictor of poor outcome (53, 54). […] Acute stroke is associated with a blood pressure higher than 170/110 mmHg in about two thirds of patients. Blood pressure falls spontaneously in the majority of patients during the first week after stroke. High blood pressure during the acute phase of stroke has been associated with a poor outcome (56). It is unclear how blood pressure should be managed during the acute phase of ischemic stroke. […] routine lowering of the blood pressure is not recommended in the first week after stroke, except for extremely elevated values on repeated measurements […] Urinary incontinence affects up to 60% of stroke patients admitted to hospital, with 25% still having problems on hospital discharge, and around 15% remaining incontinent at 1 year. […] Between 22 and 43% of patients develop fever or subfebrile temperatures during the first days after stroke […] High body temperature in the first days after stroke is associated with poor outcome (42, 67). There is currently no evidence from randomized trials to support the routine lowering of body temperature above 37 °C.”

December 28, 2016 Posted by | books, diabetes, medicine, Neurology

Diabetes and the brain (I)

I recently learned that the probability that I have brain damage as a result of my diabetes is higher than I thought it was.

I first took note of the possibility of a link between diabetes and brain development some years ago, but this is a topic I knew very little about before reading the book I’m currently reading. Below I have added some relevant quotes from chapters 10 and 11 of the book:

“Cognitive decrements [in adults with type 1 diabetes] are limited to only some cognitive domains and can best be characterised as a slowing of mental speed and a diminished mental flexibility, whereas learning and memory are generally spared. […] the cognitive decrements are mild in magnitude […] and seem neither to be progressive over time, nor to be substantially worse in older adults. […] neuroimaging studies […] suggest that type 1 diabetic patients have relatively subtle reductions in brain volume but these structural changes may be more pronounced in patients with an early disease onset.”

“With the rise of the subspecialty area ‘medical neuropsychology’ […] it has become apparent that many medical conditions may […] affect the structure and function of the central nervous system (CNS). Diabetes mellitus has received much attention in that regard, and there is now an extensive literature demonstrating that adults with type 1 diabetes have an elevated risk of CNS anomalies. This literature is no longer limited to small cross-sectional studies in relatively selected populations of young adults with type 1 diabetes, but now includes studies that investigated the pattern and magnitude of neuropsychological decrements and the associated neuroradiological changes in much more detail, with more sensitive measurements, in both younger and older patients.”

“Compared to non-diabetic controls, the type 1 diabetic group [in a meta-analysis including 33 studies] demonstrated a significant overall lowered performance, as well as impairment in the cognitive domains intelligence, implicit memory, speed of information processing, psychomotor efficiency, visual and sustained attention, cognitive flexibility, and visual perception. There was no difference in explicit memory, motor speed, selective attention, or language function. […] These results strongly support the hypothesis that there is a relationship between cognitive dysfunction and type 1 diabetes. Clearly, there is a modest, but statistically significant, lowered cognitive performance in patients with type 1 diabetes compared to non-diabetic controls. The pattern of cognitive findings does not suggest decline in all cognitive domains, but is characterised by a slowing of mental speed and a diminished mental flexibility. Patients with type 1 diabetes seem to be less able to flexibly apply acquired knowledge in a new situation. […] In all, the cognitive problems we see in type 1 diabetes mimic the patterns of cognitive ageing. […] One of the problems with much of this research is that it is conducted in patients who are seen in specialised medical centres where care is very good. Other aspects of population selection may also have affected the results. Persons who participate in research projects that include a detailed work-up at a hospital tend to be less affected than persons who refuse participation. Possibly, specific studies that recruit type 1 adults from the community, with individuals being in poorer health, would result in greater cognitive deficits”.

“[N]eurocognitive research suggests that type 1 diabetes is primarily associated with psychomotor slowing and reductions in mental efficiency. This pattern is more consistent with damage to the brain’s white matter than with grey-matter abnormalities. […] A very large neuroimaging literature indicates that adults with either type 1 or type 2 diabetes manifest structural changes in a number of brain regions […]. MRI changes in the brain of patients with type 1 diabetes are relatively subtle. In terms of effect sizes, these are at best large enough to distinguish the patient group from the control group, but not large enough to classify an individual subject as being patient or control.”

“[T]he subtle cognitive decrements in speed of information processing and mental flexibility found in diabetic patients are not merely caused by acute metabolic derangements or psychological factors, but point to end-organ damage in the central nervous system. Although some uncertainty remains about the exact pathogenesis, several mechanisms through which diabetes may affect the brain have now been identified […] The issue whether or not repeated episodes of severe hypoglycaemia result in permanent mild cognitive impairment has been debated extensively in the literature. […] The meta-analysis on the effect of type 1 diabetes on cognition (1) does not support the idea that there are important negative effects from recurrent episodes of severe hypoglycaemia on cognitive functioning, and large prospective studies did not confirm the earlier observations […] there is no evidence for a linear relationship between recurrent episodes of hypoglycaemia and permanent brain dysfunction in adults. […] Cerebral microvascular pathology in diabetes may result in a decrease of regional cerebral blood flow and an alteration in cerebral metabolism, which could partly explain the occurrence of cognitive impairments. It could be hypothesised that vascular pathology disrupts white-matter integrity in a way that is akin to what one sees in peripheral neuropathy and as such could perhaps affect the integrity of neurotransmitter systems and as a consequence limits cognitive efficiency. These effects are likely to occur diffusely across the brain. Indeed, this is in line with MRI findings and other reports.”

“[An] important issue is the interaction between different disease variables. In particular, patients with diabetes onset before the age of 5 […] and patients with advanced microangiopathy might be more sensitive to the effects of hypoglycaemic episodes or elevated HbA1c levels. […] decrements in cognitive function have been observed as early as 2 years after the diagnosis (63). It is important to consider the possibility that the developing brain is more vulnerable to the effect of diabetes […] Diabetes has a marked effect on brain function and structure in children and adolescents. As a group, diabetic children are more likely to perform more poorly than their nondiabetic peers in the classroom and earn lower scores on measures of academic achievement and verbal intelligence. Specialized neuropsychological testing reveals evidence of dysfunction in a variety of cognitive domains, including sustained attention, visuoperceptual skills, and psychomotor speed. Children diagnosed early in life – before 7 years of age – appear to be most vulnerable, showing impairments on virtually all types of cognitive tests, with learning and memory skills being particularly affected. Results from neurophysiological, cerebrovascular, and neuroimaging studies also show evidence of CNS anomalies. Earlier research attributed diabetes-associated brain dysfunction to episodes of recurrent hypoglycemia, but more recent studies have generally failed to find strong support for that view.”

“[M]ethodological issues notwithstanding, extant research on diabetic children’s brain function has identified a number of themes […]. All other things being equal, children diagnosed with type 1 diabetes early in life – within the first 5–7 years of age – have the greatest risk of manifesting neurocognitive dysfunction, the magnitude of which is greater than that seen in children with a later onset of diabetes. The development of brain dysfunction seems to occur within a relatively brief period of time, often appearing within the first 2–3 years following diagnosis. It is not limited to performance on neuropsychological tests, but is manifested on a wide range of electrophysiological measures as marked neural slowing. Somewhat surprisingly, the magnitude of these effects does not seem to worsen appreciably with increasing duration of diabetes – at least through early adulthood. […] As a group, diabetic children earn somewhat lower grades in school as compared to their nondiabetic classmates, are more likely to fail or repeat a grade, perform more poorly on formal tests of academic achievement, and have lower IQ scores, particularly on tests of verbal intelligence.”

“The most compelling evidence for a link between diabetes and poorer school outcomes has been provided by a Swedish population-based register study involving 5,159 children who developed diabetes between July 1997 and July 2000 and 1,330,968 nondiabetic children […] Those who developed diabetes very early in life (diagnosis before 2 years of age) had a significantly increased risk of not completing school as compared to either diabetic patients diagnosed after that age or to the reference population. Small, albeit statistically reliable between-group differences were noted in school marks, with diabetic children, regardless of age at diagnosis, consistently earning somewhat lower grades. Of note is their finding that the diabetic sample had a significantly lower likelihood of getting a high mark (passed with distinction or excellence) in two subjects and was less likely to take more advanced courses. The authors conclude that despite universal access to active diabetes care, diabetic children – particularly those with a very early disease onset – had a greatly increased risk of somewhat lower educational achievement […] Similar results have been reported by a number of smaller studies […] in the prospective Melbourne Royal Children’s Hospital (RCH) cohort study (22), […] only 68% of [the] diabetic sample completed 12 years of school, as compared to 85% of the nondiabetic comparison group […] Children with diabetes, especially those with an earlier onset, have also been found to require more remedial educational services and to be more likely to repeat a grade (25–28), to earn lower school grades over time (29), to experience somewhat greater school absenteeism (28, 30–32), to have a two to threefold increase in rates of depression (33–35), and to manifest more externalizing behavior problems (25).”

“Children with diabetes have a greatly increased risk of manifesting mild neurocognitive dysfunction. This is an incontrovertible fact that has emerged from a large body of research conducted over the past 60 years […]. There is, however, less agreement about the details. […] On standardized tests of academic achievement, diabetic children generally perform somewhat worse than their healthy peers […] Performance on measures of verbal intelligence – particularly those that assess vocabulary knowledge and general information about the world – is frequently compromised in diabetic children (9, 14, 26, 40) and in adults (41) with a childhood onset of diabetes. The few studies that have followed subjects over time have noted that verbal IQ scores tend to decline as the duration of diabetes increases (13, 15, 29). These effects appear to be more pronounced in boys and in those children with an earlier onset of diabetes. Whether this phenomenon is a marker of cognitive decline or whether it reflects a delay in cognitive development cannot yet be determined […] it is possible, but remains unproven, that psychosocial processes (e.g., school absence, depression, distress, externalizing problems) (42), and/or multiple and prolonged periods of classroom inattention and reduced motivation secondary to acute and prolonged episodes of hypoglycemia (43–45) may be contributing to the poor academic outcomes characteristic of children with diabetes. Although it may seem more reasonable to attribute poorer school performance and lower IQ scores to diabetes-associated disruption of specific neurocognitive processes (e.g., attention, learning, memory) secondary to brain dysfunction, there is little compelling evidence to support that possibility at the present time.”

“Children and adults who develop diabetes within the first 5–7 years of life may show moderate cognitive dysfunction that can affect all cognitive domains, although the specific pattern varies, depending both on the cognitive domain assessed and on the child’s age at assessment. Data from a recent meta-analysis of 19 pediatric studies have indicated that effect sizes tend to range between ∼ 0.4 and 0.5 for measures of learning, memory, and attention, but are lower for other cognitive domains (47). For the younger child with an early onset of diabetes, decrements are particularly pronounced on visuospatial tasks that require copying complex designs, solving jigsaw puzzles, or using multi-colored blocks to reproduce designs, with girls more likely to earn lower scores than boys (8). By adolescence and early adulthood, gender differences are less apparent and deficits occur on measures of attention, mental efficiency, learning, memory, eye–hand coordination, and “executive functioning” (13, 26, 40, 48–50). Not only do children with an early onset of diabetes often – but not invariably – score lower than healthy comparison subjects, but a subset earn scores that fall into the “clinically impaired” range […]. According to one estimate, the prevalence of clinically significant impairment is approximately four times higher in those diagnosed within the first 6 years of life as compared to either those diagnosed after that age or to nondiabetic peers (25 vs. 6%) (49). Nevertheless, it is important to keep in mind that not all early onset diabetic children show cognitive dysfunction, and not all tests within a particular cognitive domain differentiate diabetic from nondiabetic subjects.”
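
The effect sizes mentioned here (∼0.4–0.5) are presumably standardized mean differences (Cohen’s d, or something close to it); a minimal sketch of how such a number is computed, with invented test scores:

import math

# Cohen's d: the difference in group means divided by the pooled standard
# deviation. All numbers below are invented for illustration.

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Hypothetical memory-test scores, nondiabetic controls vs. early-onset patients:
d = cohens_d(mean1=100, sd1=15, n1=80, mean2=93, sd2=15, n2=80)
print(f"d = {d:.2f}")  # d = 0.47, i.e. a 7-point gap on an IQ-like scale

An effect of that size implies substantial overlap between the two groups’ distributions, which fits the observation above that not all early-onset children show cognitive dysfunction.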

“Slowed neural activity, measured at rest by electroencephalogram (EEG) and in response to sensory stimuli, is common in children with diabetes. On tests of auditory- or visual-evoked potentials (AEP; VEP), children and adolescents with more than a 2-year history of diabetes show significant slowing […] EEG recordings have also demonstrated abnormalities in diabetic adolescents in very good metabolic control. […] EEG abnormalities have also been associated with childhood diabetes. One large study noted that 26% of their diabetic subjects had abnormal EEG recordings, as compared to 7% of healthy controls […] diabetic children with EEG abnormalities recorded at diagnosis may be more likely to experience a seizure or coma (i.e., a severe hypoglycemic event) when blood glucose levels subsequently fall […] This intriguing possibility – that seizures occur in some diabetic children during hypoglycemia because of the presence of pre-existing brain dysfunction – requires further study.” 

“A very large body of research on adults with diabetes now demonstrates that the risk of developing a wide range of neurocognitive changes – poorer cognitive function, slower neural functioning, abnormalities in cerebral blood flow and brain metabolites, and reductions or alterations in gray and white-brain matter – is associated with chronically elevated blood glucose values […] Taken together, the limited animal research on this topic […] provides quite compelling support for the view that even relatively brief bouts of chronically elevated blood glucose values can induce structural and functional changes to the brain. […] [One pathophysiological model proposed is] the “diathesis” or vulnerability model […] According to this model, in the very young child diagnosed with diabetes, chronically elevated blood glucose levels interfere with normal brain maturation at a time when those neurodevelopmental processes are particularly labile, as they are during the first 5–7 years of life […]. The resulting alterations in brain organization that occur during this “sensitive period” will not only lead to delayed cognitive development and lasting cognitive dysfunction, but may also induce a predisposition or diathesis that increases the individual’s sensitivity to subsequent insults to the brain, as could be initiated by the prolonged neuroglycopenia that occurs during an episode of hypoglycemia. Data from most, but not all, research are consistent with that view. […] Research is only now beginning to focus on plausible pathophysiological mechanisms.”

After having read these chapters, I’m now sort-of-kind-of wondering to what extent my autism was/is also at least partly diabetes-mediated. There’s no evidence linking autism and diabetes presented in the chapters, but you do start to wonder even so – the central nervous system is complicated.. If diabetes did play a role there, that would probably be an argument for not considering potential diabetes-mediated brain changes in me as ‘minor’ despite my somewhat higher than average IQ (just to be clear, a high observed IQ in an individual does not preclude the possibility that diabetes had a negative IQ-effect – we don’t observe the counterfactual – but a high observed IQ does make a potential IQ-lowering effect less likely to have happened, all else equal).
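
The parenthetical point is easy to illustrate with a toy Bayesian calculation – all of the probabilities below are made up, and the example only shows the direction of the update:

# Toy illustration of the parenthesis above: observing a high IQ makes a
# (hypothetical) IQ-lowering effect less probable without ruling it out.
# All probabilities are invented.

p_effect = 0.30                  # prior: diabetes lowered IQ
p_high_given_effect = 0.10       # a high observed IQ is less likely if it did...
p_high_given_no_effect = 0.25    # ...than if it did not

p_high = p_effect * p_high_given_effect + (1 - p_effect) * p_high_given_no_effect
posterior = p_effect * p_high_given_effect / p_high
print(f"P(effect | high IQ) = {posterior:.2f}")  # 0.15: below the 0.30 prior, but not zero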

December 21, 2016 Posted by | books, diabetes, medicine, Neurology, personal

Integrated Diabetes Care (II)

Here’s my first post about the book. In this post I’ll provide some coverage of the second half of the text.

Some stuff from the chapters dealing with the UK:

“we now know that reducing the HbA1c too far and fast in some patients can be harmful [7]. This is a particularly important issue, where primary care is paid through the Quality Outcomes Framework (QoF), a general practice “pay for performance” programme [8]. A major item within QoF, is the proportion of patients below HbA1c criteria: such reporting is not linked to rates of hypoglycaemia, ambulance call outs or hospitalisation, i.e., a practice could receive a high payment through achieving the QoF target, but with a high hospitalisation/ambulance callout rate.”

“nationwide audit data for England 2009–2010 showed that […] targets for HbA1c (≤7.5%/58.5 mmol/mol), blood pressure (BP) (<140/80 mmHg) and total cholesterol (<4.0 mmol/l) were achieved in only 67 %, 69% and 41 % of people with T2D.”
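
Incidentally, the two HbA1c numbers in the quoted target are the same quantity expressed in NGSP (%) and IFCC (mmol/mol) units; as far as I know the standard ‘master equation’ conversion is IFCC = 10.929 × (NGSP − 2.15), which is easy to check:

# HbA1c unit conversion between NGSP (%) and IFCC (mmol/mol) via the
# standard master equation.

def ngsp_to_ifcc(percent):
    return 10.929 * (percent - 2.15)

def ifcc_to_ngsp(mmol_per_mol):
    return mmol_per_mol / 10.929 + 2.15

print(f"{ngsp_to_ifcc(7.5):.1f} mmol/mol")  # 58.5 mmol/mol, as in the quoted target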

One thing that is perhaps worth noting here before moving any further: the fact that you have actual audit data on this stuff at all is in itself indicative of an at least reasonable standard of care compared to many places; in a lot of countries such data simply do not exist, and it seems highly unlikely to me that the default assumption should be that things are going great in places where you do not have data of this kind. (Denmark also, incidentally, has a similar audit system, the results of which I’ve discussed in some detail before here on the blog.)

“Our local audit data shows that approximately 85–90 % of patients with diabetes are managed by GPs and practice nurses in Coventry and Warwickshire. Only a small proportion of newly diagnosed patients with T2D (typically around 5–10 %) who attend the DESMOND (Diabetes Education and Self-Management for Ongoing and Newly Diagnosed) education programme come into contact with some aspect of the specialist services [12]. […] Payment by results (PBR) has […] actively, albeit indirectly, disincentivised primary care to seek opinion from specialist services [13]. […] Large volumes of data are collected by various services ranging between primary care, local laboratory facilities, ambulance services, hospital clinics (of varying specialties), retinal screening services and several allied healthcare professionals. However, the majority of these systems are not unified and therefore result in duplication of data collection and lack of data utilisation beyond the purpose of collection. This can result in missed opportunities, delayed communication, inability to use electronic solutions (prompts, alerts, algorithms etc.), inefficient use of resources and patient fatigue (repeated testing but no apparent benefit). Thus, in the majority of the regions in England, the delivery of diabetes care is disjointed and lacks integration. Each service collects and utilises data for their own “narrow” purpose, which could be used in a holistic way […] Potential consequences of the introduction of multiple service providers are fragmentation of care, reductions in continuity of care and propagation of a reluctance to refer on to a more specialist service [9]. […] There are calls for more integration and less fragmentation in health-care [30], yet so far, the major integration projects in England have revealed negligible, if any, benefits [25, 32]. […] to provide high quality care and reduce the cost burden of diabetes, any integrated diabetes care models must prioritise prevention and early aggressive intervention over downstream interventions (secondary and tertiary prevention).”

“It is estimated that 99 % of diabetes care is self-management […] people with diabetes spend approximately only 3 h a year with healthcare professionals (versus 8757 h of self-management)” [this is a funny way of looking at things, which I’d never really considered before.]
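
The two numbers of course just partition a (non-leap) year’s worth of hours, which is presumably also where the 99 % figure comes from:

# The quoted 3 h vs. 8,757 h split is simply a non-leap year's hours:
hours_per_year = 365 * 24                         # 8760
self_management = hours_per_year - 3              # 8757, as quoted
print(f"{self_management / hours_per_year:.2%}")  # 99.97%, i.e. 'roughly 99 %'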

“In a traditional model of diabetes care the rigid divide between primary and specialist care is exacerbated by the provision of funding. For example the tariff system used in England, to pay for activity in specialist care, can create incentives for one part of the system to “hold on” to patients who might be better treated elsewhere. This system was originally introduced to incentivise providers to increase elective activity and reduce waiting times. Whilst it has been effective for improving access to planned care, it is not so well suited to achieving the continuity of care needed to facilitate integrated care [37].”

“Currently in the UK there is a mismatch between what the healthcare policies require and what the workforce is actually being trained for. […] For true integrated care in diabetes and the other long term condition specialties to work, the education and training needs for both general practitioners and hospital specialists need to be more closely aligned.”

The chapter on Germany (Baden-Württemberg):

“An analysis of the Robert Koch-Institute (RKI) from 2012 shows that more than 50 % of German people over 65 years suffer from at least one chronic disease, approximately 50 % suffer from two to four chronic diseases, and over a quarter suffer from five or more diseases [3]. […] Currently the public sector covers the majority (77 %) of health expenditures in Germany […] An estimated number of 56.3 million people are living with diabetes in Europe [16]. […] The mean age of the T2DM-cohort [from Kinzigtal, Germany] in 2013 was 71.2 years and 53.5 % were women. In 2013 the top 5 co-morbidities of patients with T2DM were essential hypertension (78.3 %), dyslipidaemia (50.5 %), disorders of refraction and accommodation (38.2 %), back pain (33.8 %) and obesity (33.3 %). […] T2DM in Kinzigtal was associated with mean expenditure of 5,935.70 € per person in 2013 (not necessarily only for diabetes care) including 40 % from inpatient stays, 24 % from drug prescriptions, 19 % from physician remuneration in ambulatory care and the rest from remedies and adjuvants (e.g., insulin pen systems, wheelchairs, physiotherapy, etc.), work incapacity or rehabilitation.”
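
For concreteness, the quoted shares of the mean per-person expenditure work out roughly as follows – the residual category is inferred as a remainder, and the rounding is mine:

# Breakdown of the quoted mean expenditure of EUR 5,935.70 per person (2013)
# by the shares given above; 'rest' is inferred as the remainder.

total = 5935.70
shares = {
    "inpatient stays": 0.40,
    "drug prescriptions": 0.24,
    "physician remuneration in ambulatory care": 0.19,
}
shares["rest (remedies/adjuvants, work incapacity, rehabilitation)"] = 1 - sum(shares.values())

for category, share in shares.items():
    print(f"{category}: EUR {total * share:,.0f}")
# inpatient stays: EUR 2,374
# drug prescriptions: EUR 1,425
# physician remuneration in ambulatory care: EUR 1,128
# rest (remedies/adjuvants, work incapacity, rehabilitation): EUR 1,009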

-ll- Netherlands:

“Zhang et al. [10] […] reported that globally, 12 % of health expenditures […] per person were spent on diabetes in 2010. The expenditure varies by region, age group, gender, and country’s income level.”

“Over the years many approaches [have been] introduced to improve the quality and continuity of care for chronic diseases. […] the Dutch minister of health approved, in 2007, the introduction of bundled-care (known in the Netherlands as a ‘chain-of-care’) approach for integrated chronic care, with special attention to diabetes. […] With a bundled payment approach – or episode-based payment – multiple providers are reimbursed a single sum of money for all services related to an episode of care (e.g., hospitalisation, including a period of post-acute care). This is in contrast to a reimbursement for each individual service (fee-for-service), and it is expected that this will reduce the volume of services provided and consequently lead to a reduction in spending. Since in a fee-for-service system the reimbursement is directly related to the volume of services provided, there is little incentive to reduce unnecessary care. The bundled payment approach promotes [in theory… – US] a more efficient use of services [26] […] As far as efficiency […] is concerned, after 3 years of evaluation, several changes in care processes have been observed, including task substitution from GPs to practice nurses and increased coordination of care [31, 36], thus improving process costs. However, Elissen et al. [31] concluded that the evidence relating to changes in process and outcome indicators remains open to doubt, and only modest improvements were shown in most indicators. […] Overall, while the Dutch approach to integrated care, using a bundled payment system with a mixed payer approach, has created a limited improvement in integration, there is no evidence that the approach has reduced morbidity and premature mortality: and it has come at an increased cost.”
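
The volume-incentive argument in that quote is mechanical enough to be worth a toy model; all numbers below are invented, and the point is only the sign of the incentive:

# Toy contrast of fee-for-service vs. bundled (episode-based) payment.
# All numbers are invented for illustration.

fee_per_service = 120.0   # provider revenue per service under fee-for-service
bundle_price = 1500.0     # fixed payment per episode under bundled payment
cost_per_service = 100.0  # provider's marginal cost per service

for n_services in (10, 15, 20):
    ffs_margin = n_services * (fee_per_service - cost_per_service)
    bundle_margin = bundle_price - n_services * cost_per_service
    print(f"{n_services} services: FFS margin {ffs_margin:.0f}, bundle margin {bundle_margin:.0f}")
# 10 services: FFS margin 200, bundle margin 500
# 15 services: FFS margin 300, bundle margin 0
# 20 services: FFS margin 400, bundle margin -500

Under fee-for-service the margin grows with volume, whereas under the bundle every extra service eats into a fixed payment – hence the expected (though, as the chapter notes, not necessarily realised) reduction in the volume of services.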

-ll- Sweden:

“In 2013 Sweden spent the equivalent of 4,904 USD per capita on health [OECD average: 3,453 USD], with 84 % of the expenditure coming from public sources [OECD average: 73 %]. […] Similarly high proportions [of public spending] can be found in the Netherlands (88 %), Norway (85 %) and Denmark (84 %) [11]. […] Sweden’s quality registers, for tracking the quality of care that patients receive and the corresponding outcomes for several conditions, are among the most developed across the OECD [17]. Yet, the coordination of care for patients with complex needs is less good. Only one in six patients had contact with a physician or specialist nurse after discharge from hospital for stroke, again with substantial variation across counties. Fewer than half of patients with type 1 diabetes […] have their blood pressure adequately controlled, with a considerable variation (from 26 % to 68 %) across counties [17]. […] at 260 admissions per 100,000 people aged over 80, avoidable hospital admissions for uncontrolled diabetes in Sweden’s elderly population are the sixth highest in the OECD, and about 1.5 times higher than in Denmark.”

“Waiting times [in Sweden] have long been a cause of dissatisfaction [19]. In an OECD ranking of 2011, Sweden was rated second worst [20]. […] Sweden introduced a health-care guarantee in 2005 [guaranteeing fast access in some specific contexts]. […] Most patients who appeal under the health-care guarantee and [are] prioritised in the “queue” ha[ve] acute conditions rather than medical problems as a consequence of an underlying chronic disease. Patients waiting for a hip replacement or a cataract surgery are cured after surgery and no life-long follow-up is needed. When such patients are prioritised, the long-term care for patients with chronic diseases is “crowded out,” lowering their priority and risking worse outcomes. The health-care guarantee can therefore lead to longer intervals between checkups, with difficulties in accessing health care if their pre-existing condition has deteriorated.”

“Within each region/county council the care of patients with diabetes is divided. Patients with type 1 diabetes get their care at specialist clinics in hospitals and the majority of patients with type 2 diabetes in primary care. Patients with type 2 diabetes who have severe complications are referred to the Diabetes Clinics at the hospital. Approximately 10 % of all patients with type 2 continue their care at the hospital clinics. They are almost always on insulin in high doses often in combination with oral agents but despite massive medication many of these patients have difficulties to achieve metabolic balance. Patients with advanced complications such as foot ulcers, macroangiopathic manifestations and treatment with dialysis are also treated at the hospitals.”

Do keep in mind here that even if only 10% of type 2 patients are treated in a hospital setting, type 2 patients may still make up perhaps half or more of the diabetes patients treated in a hospital setting; type 2 prevalence is much, much higher than type 1 prevalence. Also, in view of such treatment- and referral patterns the default assumption when doing comparative subgroup analyses should always be that the outcomes of type 2 patients treated in a hospital setting should be expected to be much worse than the outcomes of type 2 patients treated in general practice; they’re in much poorer health than the diabetics treated in general practice, or they wouldn’t be treated in a hospital setting in the first place. A related point is that regardless of how great the hospitals are at treating the type 2 patients (maybe in some contexts there isn’t actually much of a difference in outcomes between these patients and type 2 patients treated in general practice, even though you’d expect there to be one?), that option will usually not be scalable. Also, it’s to be expected that these patients are more expensive than the default type 2 patient treated by his GP [and they definitely are: “Only if severe complications arise [in the context of a type 2 patient] is the care shifted to specialised clinics in hospitals. […] these patients have the most expensive care due to costly treatment of for example foot ulcers and renal insufficiency”]; again, they’re sicker and need more comprehensive care. They would need it even if they did not get it in a hospital setting, and there are costs associated with under-treatment as well.

“About 90 % of the children [with diabetes in Sweden] are classified as having Type 1 diabetes based on positive autoantibodies and a few percent receive a diagnosis of “Maturity Onset Diabetes of the Young” (MODY) [39]. Type 2 diabetes among children is very rare in Sweden.”

Lastly, some observations from the final chapter:

“The paradox that we are dealing with is that in spite of health professionals wanting the best for their patients on a patient by patient basis, the way that individuals and institutions are organised and paid, directly influences the clinical decisions that are made. […] Naturally, optimising personal care and the provider/purchaser-commissioner budget may be aligned, but this is where diabetes poses substantial problems from a health system point of view: The majority of adverse diabetes outcomes […] are many years in the future, so a system based on this year’s budget will often not prioritise the future […] Even for these adverse “diabetes” outcomes, other clinical factors contribute to the end result. […]  attribution to diabetes may not be so obvious to those seeking ways to minimise expenditure.”

[I incidentally tried to get this point across in a recent discussion on SSC, but I’m not actually sure the point was understood, presumably because I did not explain it sufficiently clearly or go into enough detail. It is my general impression, on a related note, that many people who would like to cut down on the sort of implicit public subsidization of unhealthy behaviours that most developed economies to some extent engage in these days do not understand well enough the sorts of problems that attribution issues, and the question of how to optimize ‘post-diagnosis care’ (even if what you want to optimize is the cost minimization function…), cause in specific contexts. As I hope my comments indicate in that thread, I don’t think these sorts of issues can be ignored or dealt with in some very simple manner – and I’m tempted to say that if you think they can, you don’t know enough about these topics. I say that as one of those people who would like people who engage in risky behaviours to pay a larger (health) risk premium than they currently do].

[Continued from above, …problems from a health system point of view:]
“Payment for ambulatory diabetes care, which is essentially the preventative part of diabetes care, usually sits in a different budget to the inpatient budget where the big expenses are. […] good evidence for reducing hospitalisation through diabetes integrated care is limited […] There is ample evidence [11, 12] where clinicians own, and profit from, other services (e.g., laboratory, radiology), that referral rates are increased, often inappropriately […] Under the English NHS, the converse exists, where GPs, either holding health budgets, or receiving payments for maintaining health budgets [13], reduce their referrals to more specialist care. While this may be appropriate in many cases, it may result in delays and avoidance of referrals, even when specialist care is likely to be of benefit. [this would be the under-treatment I was talking about above…] […] There is a mantra that fragmentation of care and reductions in continuity of care are likely to harm the quality of care [14], but hard evidence is difficult to obtain.”

“The problems outlined above suggest that any health system that fails to take account of the need to integrate the payment system from both an immediate and long term perspective must be at greater risk of their diabetes integration attempts failing and/or being unsustainable. […] There are clearly a number of common factors and several that differ between successful and less successful models. […] Success in these models is usually described in terms of hospitalisation (including, e.g., DKA, amputation, cardiovascular disease events, hypoglycaemia, eye disease, renal disease, all cause), metabolic outcomes (e.g., HbA1c), health costs and access to complex care. Some have described patient related outcomes, quality of life and other staff satisfaction, but the methodology and biases have often not been open to scrutiny. There are some methodological issues that suggest that many of those with positive results may be illusory and reflect the pre-existing landscape and/or wider changes, particular to that locality. […] The reported “success” of intermediate diabetes clinics run by English General Practitioners with a Special Interest led to extension of the model to other areas. This was finally tested in a randomised controlled trial […] and shown to be a more costly model with no real benefit for patients or the system. Similarly in East Cambs and Fenland, the 1 year results suggested major reductions in hospitalisation and costs in practices participating fully in the integrated care initiative, compared with those who “engaged” later [9]. However, once the trends in neighbouring areas and among those without diabetes were accounted for, it became clear that the benefits originally reported were actually due to wider hospitalisation reductions, not just in those with diabetes. Studies of hospitalisation/hospital costs that do not compare with rates in the non-diabetic population need to be interpreted with caution.”
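
The caution in that last sentence is essentially a difference-in-differences point: a drop in hospitalisation among diabetic patients only counts as a programme effect to the extent that it exceeds the contemporaneous drop among comparable untreated (here: non-diabetic) patients. A minimal sketch, with invented rates chosen so that the apparent benefit disappears entirely:

# Difference-in-differences logic behind the caution above. All rates invented.
# Hospitalisations per 1,000 patients, before/after an integrated-care scheme:

diabetic_before, diabetic_after = 220.0, 190.0
nondiabetic_before, nondiabetic_after = 110.0, 95.0

naive_effect = diabetic_after - diabetic_before      # -30.0: looks impressive
background = nondiabetic_after - nondiabetic_before  # -15.0: but rates fell anyway

# In proportional terms the two declines are identical, so the apparent
# benefit vanishes once the background trend is accounted for:
print(f"{naive_effect / diabetic_before:.1%} vs {background / nondiabetic_before:.1%}")
# -13.6% vs -13.6%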

“Kaiser Permanente is often described as a great diabetes success story in the USA due to its higher than peer levels of, e.g., HbA1c testing [23]. However, in the 2015 HEDIS data, levels of testing, metabolic control achieved and complication rates show quality metrics lower than the English NHS, in spite of the problems with the latter [23]. Furthermore, HbA1c rates above 9 % remain at approximately 20 %, in Southern California [24] or 19 % in Northern California [25], a level much higher than that in the UK […] Similarly, the Super Six model […] has been lauded as a success, as a result of reductions in patients with, e.g., amputations. However, these complications were in the bottom quartile of performance for these outcomes in England [26] and hence improvement would be expected with the additional diabetes resources invested into the area. Amputation rates remain higher than the national average […] Studies showing improvement from a low baseline do not necessarily provide a best practice model, but perhaps a change from a system that required improvement. […] Several projects report improvements in HbA1c […] improvements in HbA1c, without reports of hypoglycaemia rates and weight gain, may be associated with worse outcomes as suggested from the ACCORD trial [28].”

December 18, 2016 Posted by | books, diabetes, economics, medicine

The Ageing Immune System and Health (I)

“[A]s we age, we observe a greater heterogeneity of ability and health. The variation in, say, walking speed is far greater in a group of 70 year olds than in a group of 20 year olds. This makes the study of ageing and the factors driving that heterogeneity of health and functional ability in old age vital. […] The study of the immune system across the lifespan has demonstrated that as we age the immune system undergoes a decline in function, termed immunosenescence. […] the decline in function is not universal across all aspects of the immune system, and neither is the magnitude of functional loss similar between individuals. The theory of inflammageing, which represents a chronic low grade inflammatory state in older people, has been described as a major consequence of immunosenescence, though lifestyle factors such as reduced physical activity and increased adiposity also play a major role […] In poor health, older people accumulate disease, described as multimorbidity. This in turn means traditional single system based health care becomes less valid as each system affected by disease impacts on other systems. This leads some older people to be at greater risk of adverse events such as disability and death. The syndrome of this increased vulnerability is described as frailty, and increasing fundamental evidence is emerging that suggests immunosenescence and inflammageing may underpin frailty […] Thus frailty is seen as one clinical manifestation of immunosenescence.”

The above quotes are from the book’s preface. I gave it 3 stars on goodreads. I should probably, considering that this topic is mentioned in the preface, mention explicitly that the book doesn’t actually go into much detail about the downsides of ‘traditional single system based health care’; the book is mainly about immunology and related topics, and although it provides coverage of intervention studies etc., it doesn’t really provide detailed coverage of issues like the optimization of organizational structures or systems analysis. The book I was reading when I started writing this post – Integrated Diabetes Care – A Multidisciplinary Approach (blog coverage here) – is incidentally pretty much exclusively devoted to these sorts of topics (and it did a fine job).

If you have never read any sort of immunology text before, the book will probably be unreadable to you – “It is aimed at fundamental scientists and clinicians with an interest in ageing or the immune system.” In my coverage below I have not made any effort to pick out quotes that would be particularly easy for the average reader to read and understand; this is another way of saying that the post is mainly written for my own benefit, perhaps even more so than is usually the case, not for the benefit of potential readers reading along here.

“Physiological ageing is associated with significant re-modelling of the immune system. Termed immunosenescence, age-related changes have been described in the composition, phenotype and function of both the innate and adaptive arms of the immune system. […] Neutrophils are the most abundant leukocyte in circulation […] The first step in neutrophil anti-microbial defence is their extravasation from the bloodstream and migration to the site of infection. Whilst age appears to have no effect upon the speed at which neutrophils migrate towards chemotactic signals in vitro [15], the directional accuracy of neutrophil migration to inflammatory agonists […] as well as bacterial peptides […] is significantly reduced [15]. […] neutrophils from older adults clearly exhibit defects in several key defensive mechanisms, namely chemotaxis […], phagocytosis of opsonised pathogens […] and NET formation […]. Given this near global impairment in neutrophil function, alterations to a generic signalling element rather than defects in molecules specific to each anti-microbial defence strategy is likely to explain the aberrations in neutrophil function that occur with age. In support of this idea, ageing in rodents is associated with a significant increase in neutrophil membrane fluidity, which coincides with a marked reduction in neutrophil function […] ageing results in a reduction in NK cell production and proliferation […] Numerous studies have examined the impact of age […], with the general consensus that at the single cell level, NK cell cytotoxicity (NKCC) is reduced with age […] retrospective and prospective studies have reported relationships between low NK cell activity in older adults and (1) a past history of severe infection, (2) an increased risk of future infection, (3) a reduced probability of surviving infectious episodes and (4) infectious morbidity [49–51]. Related to this increased risk of infection, reduced NKCC prior to and following influenza vaccination in older adults has been shown to be associated with reduced protective anti-hemagglutinin titres, worsened health status and an increased incidence of respiratory tract infection […] Whilst age has no effect upon the frequency or absolute number of monocytes [54, 55], the composition of the monocyte pool is markedly different in older adults, who present with an increased frequency of non-classical and intermediate monocytes, and fewer classical monocytes when compared to their younger counterparts”.

“Via their secretion of growth factors, pro-inflammatory cytokines, and proteases, senescent cells compromise tissue homeostasis and function, and their presence has been causally implicated in the development of such age-associated conditions as sarcopenia and cataracts [92]. Several studies have demonstrated a role for innate immune cells in the recognition and clearance of senescent cells […] ageing is associated with a low-grade systemic up-regulation of circulating inflammatory mediators […] Results from longitudinal-based studies suggest inflammageing is deleterious to human health with studies in older cohorts demonstrating that low-grade increases in the circulating levels of TNF-α [103], IL-6 […] and CRP [105] are associated with both all-cause […] and cause-specific […] mortality. Furthermore, inflammageing is a predictor of frailty [106] and is considered a major factor in the development of several age-related pathologies, such as atherosclerosis [107], Alzheimer’s disease [100] and sarcopenia [108].”

“Persistent viral infections, reduced vaccination responses, increased autoimmunity, and a rise in inflammatory syndromes all typify immune ageing. […] These changes can be in part attributed to the accumulation of highly differentiated senescent T cells, characterised by their decreased proliferative capacity and the activation of senescence signaling pathways, together with alterations in the functional competence of regulatory cells, allowing inflammation to go unchecked. […] Immune senescence results from defects in different leukocyte populations, however the dysfunction is most profound in T cells [6, 7]. The responses of T cells from aged individuals are typically slower and of a lower magnitude than those of young individuals […] while not all equally affected by age, the overall T cell number does decline dramatically as a result of thymic atrophy […] T cell differentiation is a highly complex process controlled not only by costimulation but also by the strength and duration of T cell receptor (TCR) signalling [34]. Nearly all TCR signalling pathways have been found altered during ageing […] two phenotypically distinct subsets of B cells […] have been demonstrated to exert immunosuppressive functions. The frequency and function of both these Breg subsets declines with age”.

“The immune impairments in patients with chronic hyperglycemia resemble those seen during ageing, namely poor control of infections and reduced vaccination response [99].” [This is hardly surprising. ‘Hyperglycemia -> accelerated ageing’ seems generally to be a good (over-)simplified model in many contexts. To give another illustrative example, from Czernik & Fowlkes’ text: “approximately 4–6 years of diabetes exposure in some children may be sufficient to increase skin AGEs to levels that would naturally accumulate only after ~25 years of chronological aging”].

“The term “immunosenescence” is commonly taken to mean age-associated changes in immune parameters hypothesized to contribute to increased susceptibility and severity of the older adult to infectious disease, autoimmunity and cancer. In humans, it is characterized by lower numbers and frequencies of naïve T and B cells and higher numbers and frequencies of late-differentiated T cells, especially CD8+ T cells, in the peripheral blood. […] Low numbers of naïve cells render the aged highly susceptible to pathogens to which they have not been previously exposed, but are not otherwise associated with an “immune risk profile” predicting earlier mortality. […] many of the changes, or most often, differences, in immune parameters of the older adult relative to the young have not actually been shown to be detrimental. The realization that compensatory changes may be developing over time is gaining ground […] Several studies have now shown that lower percentages and absolute numbers of naïve CD8+ T cells are seen in all older subjects whereas the accumulation of very large numbers of CD8+ late-stage differentiated memory cells is seen in a majority but not in all older adults [2]. The major difference between this majority of subjects with such accumulations of memory cells and those without is that the former are infected with human herpesvirus 5 (Cytomegalovirus, CMV). Nevertheless, the question of whether CMV is associated with immunosenescence remains so far uncertain as no causal relationship has been unequivocally established [5]. Because changes are seen rapidly after primary infection in transplant patients [6] and infants [7], it is highly likely that CMV does drive the accumulation of CD8+ late-stage memory cells, but the relationship of this to senescence remains unclear. […] In CMV-seropositive people, especially older people, a remarkably high fraction of circulating CD8+ T lymphocytes is often found to be specific for CMV. However, although the proportion of naïve CD8+ T cells is lower in the old than the young whether or not they are CMV-infected, the gross accumulation of late-stage differentiated CD8+ T cells only occurs in CMV-seropositive individuals […] It is not clear whether this is adaptive or pathological […] The total CMV-specific T-cell response in seropositive subjects constitutes on average approximately 10 % of both the CD4+ and CD8+ memory compartments, and can be far greater in older people. […] there are some published data suggesting that in young humans or young mice, CMV may improve immune responses to some antigens and to influenza virus, probably by way of increased pro-inflammatory responses […] observations suggest that the effect of CMV on the immune system may be highly dependent also on an individual’s age and circumstances, and that what is viewed as ageing is in fact later collateral damage from immune reactivity that was beneficial in earlier life [47, 48]. This is saying nothing more than that the same immune pathology that always accompanies immune responses to acute viruses is also caused by CMV, but over a chronic time scale and usually subclinical. […] data suggest that the remodeling of the T-cell compartment in the presence of a latent infection with CMV represents a crucial adaptation of the immune system towards the chronic challenge of lifelong CMV.”

The authors take issue with using the term ‘senescence’ to describe some of the changes discussed above, because this term by definition should be employed only in the context of changes that are demonstrably deleterious to health. It should be kept in mind in this context that insufficient immunological protection against CMV in old age could easily be much worse than the secondary inflammatory effects, harmful though these may well be; CMV in the context of AIDS, organ transplantation (“CMV is the most common and single most important viral infection in solid organ transplant recipients” – medscape) and other disease states involving compromised immune systems can be really bad news (“Disease caused by human herpesviruses tends to be relatively mild and self-limited in immunocompetent persons, although severe and quite unusual disease can be seen with immunosuppression.” Holmes et al.)

“The role of CMV in the etiology of […] age-associated diseases is currently under intensive investigation […] in one powerful study, the impact of CMV infection on mortality was investigated in a cohort of 511 individuals aged at least 65 years at entry, who were then followed up for 18 years. Infection with CMV was associated with an increased mortality rate in healthy older individuals due to an excess of vascular deaths. It was estimated that those elderly who were CMV- seropositive at the beginning of the study had a near 4-year reduction in lifespan compared to those who were CMV-seronegative, a striking result with major implications for public health [59]. Other data, such as those from the large US NHANES-III survey, have shown that CMV seropositivity together with higher than median levels of the inflammatory marker CRP correlate with a significantly lower 10-year survival rate of individuals who were mostly middle-aged at the start of the study [63]. Further evidence comes from a recently published Newcastle 85+ study of the immune parameters of 751 octogenarians investigated for their power to predict survival during a 65-month follow-up. It was documented that CMV-seropositivity was associated with increased 6-year cardiovascular mortality or death from stroke and myocardial infarction. It was therefore concluded that CMV-seropositivity is linked to a higher incidence of coronary heart disease in octogenarians and that senescence in both the CD4+ and CD8+ T-cell compartments is a predictor of overall cardiovascular mortality”.

“The incidence and severity of many infections are increased in older adults. Influenza causes approximately 36,000 deaths and more than 100,000 hospitalizations in the USA every year […] Vaccine uptake differs tremendously between European countries with more than 70 % of the older population being vaccinated against influenza in The Netherlands and the United Kingdom, but below 10 % in Poland, Latvia and Estonia during the 2012–2013 season […] several systematic reviews and meta-analyses have estimated the clinical efficacy and/or effectiveness of a given influenza vaccine, taking into consideration not only randomized trials, but also cohort and case-control studies. It can be concluded that protection is lower in the old than in young adults […] [in one study including “[m]ore than 84,000 pneumococcal vaccine-naïve persons above 65 years of age”] the effect of age on vaccine efficacy was studied and the statistical model showed a decline of vaccine efficacy for vaccine-type CAP and IPD [Invasive Pneumococcal Disease] from 65 % (95 % CI 38–81) in 65-year old subjects, to 40 % (95 % CI 17–56) in 75-year old subjects […] The most effective measure to prevent infectious disease is vaccination. […] Over the last 20–30 years tremendous progress has been achieved in developing novel/improved vaccines for children, but a lot of work still needs to be done to optimize vaccines for the elderly.”

December 12, 2016 Posted by | books, diabetes, medicine | Leave a comment

Integrated Diabetes Care (I)

I’ll start out by quoting from my goodreads review of the book:

The book provides a good overview of studies and clinical trials which have attempted to improve the coordination of diabetes treatment in specific areas. The book covers research from all over the world – the UK, the US, Hong Kong, South Africa, Germany, Netherlands, Sweden, Australia. The language of the publication is quite good, considering the number of non-native English speaking contributors. An at least basic understanding of medical statistics is probably required for one to properly read and understand this book in full.

The book is quite good if you want to understand how people have tried to improve (mainly type 2) diabetes treatment ‘from an organizational point of view’ (the main focus here is not on new treatment options, but on how to optimize care delivery and make the various care providers involved work better together, in a way that improves outcomes for patients (at an acceptable cost?), which is to a large extent an organizational problem), but it’s actually also probably quite a nice book if you simply want to know more about how diabetes treatment systems differ across countries; the contributors don’t assume that the readers know how e.g. the Swedish approach to diabetes care differs from that of e.g. Pennsylvania, so many chapters contain interesting details on how specific countries/health care providers handle specific aspects of e.g. care delivery or finance.

What people mean by ‘integrated care’ varies a bit depending on whom you ask (patients and service providers may emphasize different dimensions when thinking about these topics), as should also be clear from the quotes below; however I thought it might be a good idea to start the post with the quote above, so that people who have no idea what ‘integrated diabetes care’ is would not start out reading the post completely in the dark. In short, a big problem in health service delivery contexts is that care provision is often fragmented and uncoordinated, for many reasons. Ideally you might like doctors working in general practice to collaborate smoothly and efficiently with hospital staff and various other specialists involved in diabetes care (…and perhaps also with social services and mental health care providers…), but that kind of coordination often doesn’t happen, leading to what may well be sub-optimal care provision. Collaboration and a ‘desirable’ (whatever that might mean) level of coordination between service providers doesn’t happen automatically; it takes money, effort and a lot of other things (that the book covers in some detail…) to make it happen – and so it often doesn’t happen; at the very least there’s a lot of room for improvement even in places where things work comparatively well. Some quotes from the book on these topics:

“it is clear that in general, wherever you are in the world, service delivery is now fragmented [2]. Such fragmentation is a manifestation of organisational and financial barriers, which divide providers at the boundaries of primary and secondary care, physical and mental health care, and between health and social care. Diverse specific organisational and professional cultures, and differences in terms of governance and accountability also contribute to this fragmentation [2]. […] Many of these deficiencies are caused by organisational problems (barriers, silo thinking, accountability for budgets) and are often to the detriment of all of those involved: patients, providers and funders – in extreme cases – leading to lose-lose-lose-situations […] There is some evidence that integrated care does improve the quality of patient care and leads to improved health or patient satisfaction [10, 11], but evidence of economic benefits remain[s] an issue for further research [10]. Failure to improve integration and coordination of services along a “care continuum” can result in suboptimal outcomes (health and cost), such as potentially preventable hospitalisation, avoidable death, medication errors and adverse drug events [3, 12, 13].”

“Integrated care is often described as a continuum [10, 24], actually depicting the degree of integration. This degree can range from linkage, to coordination and integration [10], or segregation (absence of any cooperation) to full integration [25], in which the integrated organisation is responsible for the full continuum of care […] this classification of integration degree can be expanded by introducing a second dimension, i.e., the user needs. User need should be defined by criteria, like stability and severity of condition, duration of illness (chronic condition), service needed and capacity for self-direction (autonomy). Accordingly, a low level of need will not require a fully integrated system, then [10, 24] […] Kaiser Permanente is a good example of what has been described as a “fully integrated system”. […] A key element of Kaiser Permanente’s approach to chronic care is the categorisation of their chronically ill patients into three groups based on their degree of need”.

It may be a useful simplification to think along the lines of: ‘Higher degree of need = a higher level of integration becomes desirable/necessary. Disease complexity is closely related to degree of need.’ Some related observations from the book:

“Diabetes is a condition in which longstanding hyperglycaemia damages arteries (causing macrovascular, e.g., ischaemic heart, peripheral and cerebrovascular disease, and microvascular disease, e.g., retinopathy, nephropathy), peripheral nerves (causing neuropathy), and other structures such as skin (causing cheiroarthropathy) and the lens (causing cataracts). Different degrees of macrovascular, neuropathic and cutaneous complications lead to the “diabetic foot.” A proportion of patients, particularly with type 2 diabetes have metabolic syndrome including central adiposity, dyslipidaemia, hypertension and non alcoholic fatty liver disease. Glucose management can have severe side effects, particularly hypoglycaemia and weight gain. Under-treatment is not only associated with long term complications but infections, vascular events and increased hospitalisation. Absence of treatment in type 1 diabetes can rapidly lead to diabetic keto-acidosis and death. Diabetes doubles the risk for depression, and on the other hand, depression may increase the risk for hyperglycaemia and finally for complications of diabetes [41]. Essentially, diabetes affects every part of the body once complications set in, and the crux of diabetes management is to normalise (as much as possible) the blood glucose and manage any associated risk factors, thereby preventing complications and maintaining the highest quality of life. […] glucose management requires minute by minute, day by day management addressing the complexity of diabetes, including clinical and behavioural issues. While other conditions also have the patient as therapist, diabetes requires a fully empowered patient with all of the skills, knowledge and motivation every hour of the waking day. A patient that is fully engaged in self-management, and has support systems, is empowered to manage their diabetes and will likely experience better outcomes compared with those who do not have access to this support. […] in diabetes, the boundaries between primary care and secondary care are blurred. Diabetes specialist services, although secondary care, can provide primary care, and there are GPs, diabetes educators, and other ancillary providers who can provide a level of specialist care.”

In short, diabetes is a complex disease – it’s one of those diseases where a significant degree of care integration is likely to be necessary in order to get anywhere close to optimal outcomes. A little more on these topics:

“The unique challenge to providers is to satisfy two specific demands in diabetes care. The first is to anticipate and recognize the onset of complications through comprehensive diabetes care, which demands meticulous attention to a large number of process-of-care measures at each visit. The second, arguably greater challenge for providers is to forestall the development of complications through effective diabetes care, which demands mastery over many different skills in a variety of distinct fields in order to achieve performance goals covering multiple facets of management. Individually and collectively, these dual challenges constitute a virtually unsustainable burden for providers. That is because (a) completing all the mandated process measures for comprehensive care requires far more time than is traditionally available in a single patient visit; and (b) most providers do not themselves possess skills in all the ancillary disciplines essential for effective care […] Diabetes presents patients with similarly unique dual challenges in mastering diabetes self-management with self-awareness, self-empowerment and self-confidence. Comprehensive Diabetes Self-Management demands the acquisition of a variety of skills in order to fulfil a multitude of tasks in many different areas of daily life. Effective Diabetes Self-Management, on the other hand, demands constant vigilance, consistent discipline and persistent attention over a lifetime, without respite, to nutritional self-discipline, monitoring blood glucose levels, and adherence to anti-diabetic medication use. Together, they constitute a burden that most patients find difficult to sustain even with expert assistance, and all-but-impossible without it.”

“Care coordination achieves critical importance for diabetes, in particular, because of the need for management at many different levels and locations. At the most basic level, the symptomatic management of acute hypo- and hyperglycaemia often devolves to the PCP [primary care provider], even when a specialist oversees more advanced strategies for glycaemic management. At another level, the wide variety of chronic complications requires input from many different specialists, whereas hospitalizations for acute emergencies often fall to hospitalists and critical care specialists. Thus, diabetes care is fraught with the potential for sometimes conflicting, even contradictory management strategies, making care coordination mandatory for success.”

“Many of the problems surrounding the provision of adequate person-centred care for those with diabetes revolve around the pressures of clinical practice and a lack of time. Good diabetes management requires attention to a number of clinical parameters
1. (Near) Normalization of blood glucose
2. Control of co-morbidities and risk factors
3. Attainment of normal growth and development
4. Prevention of Acute Complications
5. Screening for Chronic Complications
To fit all this and a holistic, patient-centred collaborative approach into a busy general practice, the servicing doctor and other team members must understand that diabetes cannot be “dealt with” coincidently during a patient consultation for an acute condition.”

“Implementation of the team model requires sharing of tasks and responsibilities that have traditionally been the purview of the physician. The term “team care” has traditionally been used to indicate a group of health-care professionals such as physicians, nurses, pharmacists, or social workers, who work together in caring for a group of patients. In a 2006 systematic review of 66 trials testing 11 strategies for improving glycaemic control for patients with diabetes, only team care and case management showed a significant impact on reducing HbA1c levels [18].”

Moving on, I found the chapter about Hong Kong interesting, for several reasons. The quality of the Scandinavian health registries is probably widely known in the epidemiological community, but I was not aware that the quality of Hong Kong’s diabetes data, and of their data management strategies, is similarly high. Nor was I aware of some of the things they’ve discovered while analyzing those data. A few quotes from that part of the coverage:

“Given the volume of patients in the clinics, the team’s earliest work from the HKDR [Hong Kong Diabetes Registry, US] prioritized the development of prediction models, to allow for more efficient, data-driven risk stratification of patients. After accruing data for a decade on over 7000 patients, the team established 5-year probabilities for major diabetes-related complications as defined by the International Classification of Diseases retrieved from the CMS [Clinical Management System, US]. These included end stage renal disease [7], stroke [8], coronary heart disease [9], heart failure [10], and mortality [11]. These risk equations have a 70–90 % sensitivity and specificity for predicting outcomes based on the parameters collected in the registry.”
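To make the ‘risk equation’ idea a bit more concrete: such models are typically logistic (or Cox) regressions whose output is a predicted event probability used to stratify patients. Below is a minimal sketch of what applying such an equation might look like in code – note that the predictors, coefficients and cutoff are invented placeholders for illustration, not the actual published HKDR equations.

```python
import math

# Hypothetical coefficients for a logistic 5-year risk equation.
# The published HKDR equations use different predictors and weights.
COEFS = {"intercept": -6.0, "age": 0.05, "hba1c": 0.30,
         "egfr": -0.02, "smoker": 0.45}

def five_year_risk(age, hba1c, egfr, smoker):
    """Predicted 5-year probability of a complication (logistic model)."""
    z = (COEFS["intercept"] + COEFS["age"] * age + COEFS["hba1c"] * hba1c
         + COEFS["egfr"] * egfr + COEFS["smoker"] * smoker)
    return 1.0 / (1.0 + math.exp(-z))  # logistic link

# Risk-stratify a patient: flag for intensive follow-up above a cutoff.
risk = five_year_risk(age=62, hba1c=8.5, egfr=55, smoker=1)
print(f"5-year risk: {risk:.1%}; high-risk: {risk > 0.20}")  # ~26.9%; True
```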

“The lifelong commitments to medication adherence and lifestyle modification make diabetes self-management both physically and emotionally taxing. The psychological burdens result from insulin injection, self-monitoring of blood glucose, dietary restriction, as well as fear of complications, which may significantly increase negative emotions in patients with diabetes. Depression, anxiety, and distress are prevalent mental afflictions found in patients with diabetes […] the prevalence of depression was 18.3 % in Hong Kong Chinese patients with type 2 diabetes. Furthermore, depression was associated with poor glycaemic control and self-reported hypoglycaemia, in part due to poor adherence […] a prospective study involving 7835 patients with type 2 diabetes without cardiovascular disease (CVD) at baseline […] found that [a]fter adjusting for conventional risk factors, depression was independently associated with a two to threefold increase in the risk of incident CVD [22].”

“Diabetes has been associated with increased cancer risk, but the underlying mechanism is poorly understood. The linkage between the longitudinal clinical data within the HKDR and the cancer outcome data in the CMS has provided important observational findings to help elucidate these connections. Detailed pharmacoepidemiological analyses revealed attenuated cancer risk in patients treated with insulin and oral anti-diabetic drugs compared with non-users of these drugs”

“Among the many challenges of patient self-management, lack of education and empowerment are the two most cited barriers [59]. Sufficient knowledge is unquestionably important in self-care, especially in people with low health literacy and limited access to diabetes education. Several systematic reviews [have shown] that self-management education with comprehensive lifestyle interventions improved glycaemic and cardiovascular risk factor control [60–62].”

“Clinical trials are expensive because of the detail and depth of data required on each patient, which often require separate databases to be developed outside of the usual-care electronic medical records or paper-based chart systems. These databases must be built, managed, and maintained from scratch every time, often requiring double-entry of data by research staff. The JADE [Joint Asia Diabetes Evaluation] programme provides a more efficient means of collecting the key clinical variables in its comprehensive assessments, and allows researchers to add new fields as necessary for research purposes. This obviates the need for redundant entry into non-clinical systems, as the JADE programme is simultaneously a clinical care tool and prospective database. […] A large number of trials fail because of inadequate recruitment [67]. The JADE programme has allowed for ready identification of eligible clinical trial participants because of its detailed clinical database. […] One of the greatest challenges in clinical trials is maintaining the contact between researchers and patients over many years. […] JADE facilitates long-term contact with the patient, as part of routine periodic follow-up. This also allows researchers to evaluate longer term outcomes than many previous trials, given the great expense in maintaining databases for the tracking of longitudinal outcomes.”

Lastly, some stuff on cost and related matters from the book:

“Diabetes imposes a massive economic burden on all healthcare systems, accounting for 11 % of total global healthcare expenditure on adults in 2013.”

“Often, designated service providers institute managed care programmes to standardize and control care rendered in a safe and cost-effective manner. However, many of these programmes concentrate on cost-savings rather than patient service utilization and improved clinical outcomes. [this part of the coverage is from South Africa, but these kinds of approaches are definitely not limited to SA – US] […] While these approaches may save some costs in the short-term, Managed Care Programmes which do not address patient outcomes nor reduce long term complications, ignore the fact that the majority of the costs for treating diabetes, even in the medium term, are due to the treatment of acute and chronic complications and for inpatient hospital care [14]. Additionally, it is well established that poor long-term clinical outcomes increase the cost burden of managing the patient with diabetes by up to 250 %. […] overall, the costs of medication, including insulin, accounts for just 7 % of all healthcare costs related to diabetes [this number varies across countries – I’ve seen estimates of 15% in the past – as does the out-of-pocket share of that cost; but the costs of medications constitute a relatively small proportion of the total costs of diabetes everywhere you look, regardless of health care system and prevalence. If you include indirect costs as well, which you should, this becomes even more obvious – US]”

“[A] study of the Economic Costs of Diabetes in the U.S. in 2012 [25] showed that for people with diabetes, hospital inpatient care accounted for 43 % of the total medical cost of diabetes.”

“There is some evidence of a positive impact of integrated care programmes on the quality of patient care [10, 34]. There is also a cautious appraisal that warns that “Even in well-performing care groups, it is likely to take years before cost savings become visible” […]. Based on a literature review from 1996 to 2004 Ouwens et al. [11] found that integrated care programmes seemed to have positive effects on the quality of care. […] because of the variation in definitions of integrated care programmes and [because] the components used cover a broad spectrum, the results should be interpreted with caution. […] In their systematic review of the effectiveness of integrated care Ouwens et al. [11] could report on only seven (about 54 %) reviews which had included an economic analysis. Four of them showed financial advantages. In their study Powell Davies et al. [34] found that less than 20 % of studies that measured economic outcomes found a significant positive result. Similarly, de Bruin et al. [37] evaluated the impact of disease management programmes on health-care expenditures for patients with diabetes, depression, heart failure or chronic obstructive pulmonary disease (COPD). Thirteen of 21 studies showed cost savings, but the results were not statistically significant, or not actually tested for significance. […] well-designed economic evaluation studies of integrated care approaches are needed, in particular in order to support decision-making on the long-term financing of these programmes [30, 39]. Savings from integrated care are only a “hope” as long as there is no carefully designed economic analysis with a kind of full-cost accounting.”

“The cost-effectiveness of integrated care for patients with diabetes depends on the model of integrated care used, the system in which it is used, and the time-horizon chosen [123]. Models of cost benefit for using health coaching interventions for patients with poorly controlled diabetes have generally found a benefit in reducing HbA1c levels, but at the cost of paying for the added cost of health coaching which is not offset in the short term by savings from emergency department visits and hospitalizations […] An important question in assessing the cost of integrated care is whether it needs to be cost-saving or cost-neutral to be adopted, or is it enough to increase quality-adjusted life years (QALYs) at a “reasonable” cost (usually pegged at between $30,000 and $60,000 per QALY saved). Most integrated care programmes for patients with diabetes that have been evaluated for cost-effectiveness would meet this more liberal criterion […] In practice, integrated care programmes for patients with diabetes are often part of generalized programmes of care for patients with other chronic medical conditions, making the allocation of costs and savings with respect to integrated care for diabetes difficult to estimate. At this point, integrated care for patients with diabetes appears to be a widely accepted goal. The question becomes: which model of integrated care is most effective at reasonable cost? Answering this question depends both on what costs are included and what outcomes are measured; the answers may vary among different patient populations and different care systems.”
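For concreteness, the cost-per-QALY criterion mentioned above involves very simple arithmetic; here is a minimal sketch in which the programme cost, downstream savings and QALY gain are all invented numbers for illustration:

```python
# Invented illustrative numbers for a diabetes health-coaching programme.
programme_cost = 1_200     # added cost per patient
downstream_savings = 400   # averted ED visits/hospitalizations per patient
qalys_gained = 0.02        # QALYs gained per patient over the horizon

# Incremental cost-effectiveness ratio (ICER): net cost per QALY gained.
icer = (programme_cost - downstream_savings) / qalys_gained
print(f"ICER: ${icer:,.0f} per QALY")                                # $40,000 per QALY
print("meets $30,000-$60,000 criterion:", 30_000 <= icer <= 60_000)  # True
```

Note how sensitive the conclusion is to what goes into the numerator: whether a programme ‘passes’ depends on which costs and savings are counted, which is precisely the cost-allocation problem the quote describes.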

December 6, 2016 Posted by | books, diabetes, economics, medicine | Leave a comment

Random stuff

i. Fire works a little differently than people imagine. A great ask-science comment. See also AugustusFink-nottle’s comment in the same thread.

ii. (Embedded video.)

iii. I was very conflicted about whether to link to this because I haven’t actually spent any time looking at it myself, so I don’t know if it’s any good, but according to somebody (?) who linked to it on SSC the people behind this stuff have academic backgrounds in evolutionary biology, which is something at least (whether you think this is a good thing or not will probably depend greatly on your opinion of evolutionary biologists, but I’ve definitely learned a lot more about human mating patterns, partner interaction patterns, etc. from evolutionary biologists than I have from personal experience, so I’m probably in the ‘they-sometimes-have-interesting-ideas-about-these-topics-and-those-ideas-may-not-be-terrible’-camp). I figure these guys are much more application-oriented than were some of the previous sources I’ve read on related topics, such as e.g. Kappeler et al. I add the link mostly so that if I in five years’ time have a stroke that obliterates most of my decision-making skills, causing me to decide that entering the dating market might be a good idea, I’ll have some idea where it might make sense to start.

iv. Stereotype (In)Accuracy in Perceptions of Groups and Individuals.

“Are stereotypes accurate or inaccurate? We summarize evidence that stereotype accuracy is one of the largest and most replicable findings in social psychology. We address controversies in this literature, including the long-standing and continuing but unjustified emphasis on stereotype inaccuracy, how to define and assess stereotype accuracy, and whether stereotypic (vs. individuating) information can be used rationally in person perception. We conclude with suggestions for building theory and for future directions of stereotype (in)accuracy research.”

A few quotes from the paper:

“Demographic stereotypes are accurate. Research has consistently shown moderate to high levels of correspondence accuracy for demographic (e.g., race/ethnicity, gender) stereotypes […]. Nearly all accuracy correlations for consensual stereotypes about race/ethnicity and gender exceed .50 (compared to only 5% of social psychological findings; Richard, Bond, & Stokes-Zoota, 2003). […] Rather than being based in cultural myths, the shared component of stereotypes is often highly accurate. This pattern cannot be easily explained by motivational or social-constructionist theories of stereotypes and probably reflects a “wisdom of crowds” effect […] personal stereotypes are also quite accurate, with correspondence accuracy for roughly half exceeding r = .50.”
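‘Correspondence accuracy’ here is simply the correlation between judged and actual group attributes across groups; a minimal sketch with invented numbers (the six group values below are made up for illustration):

```python
from statistics import correlation  # Python 3.10+

# Invented example: judged vs. actual values of some attribute for 6 groups.
judged = [52, 61, 48, 70, 55, 64]
actual = [50, 65, 45, 72, 58, 60]

r = correlation(judged, actual)  # Pearson's r between judgments and criteria
print(f"correspondence accuracy: r = {r:.2f}")  # ~0.94, well above .50
```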

“We found 34 published studies of racial-, ethnic-, and gender-stereotype accuracy. Although not every study examined discrepancy scores, when they did, a plurality or majority of all consensual stereotype judgments were accurate. […] In these 34 studies, when stereotypes were inaccurate, there was more evidence of underestimating than overestimating actual demographic group differences […] Research assessing the accuracy of  miscellaneous other stereotypes (e.g., about occupations, college majors, sororities, etc.) has generally found accuracy levels comparable to those for demographic stereotypes”

“A common claim […] is that even though many stereotypes accurately capture group means, they are still not accurate because group means cannot describe every individual group member. […] If people were rational, they would use stereotypes to judge individual targets when they lack information about targets’ unique personal characteristics (i.e., individuating information), when the stereotype itself is highly diagnostic (i.e., highly informative regarding the judgment), and when available individuating information is ambiguous or incompletely useful. People’s judgments robustly conform to rational predictions. In the rare situations in which a stereotype is highly diagnostic, people rely on it (e.g., Crawford, Jussim, Madon, Cain, & Stevens, 2011). When highly diagnostic individuating information is available, people overwhelmingly rely on it (Kunda & Thagard, 1996; effect size averaging r = .70). Stereotype biases average no higher than r = .10 (Jussim, 2012) but reach r = .25 in the absence of individuating information (Kunda & Thagard, 1996). The more diagnostic individuating information people have, the less they stereotype (Crawford et al., 2011; Krueger & Rothbart, 1988). Thus, people do not indiscriminately apply their stereotypes to all individual members of stereotyped groups.” (Funder incidentally talked about this stuff as well in his book Personality Judgment).

One thing worth mentioning in the context of stereotypes is that if you look at stuff like crime data – which sadly not many people do – and you stratify based on stuff like country of origin, then the sub-group differences you observe tend to be very large. Some of the differences you observe between subgroups are not on the order of something like 10%, which is probably the sort of difference that could easily be ignored without major consequences; some subgroup differences can easily amount to one or two orders of magnitude. The differences are in some contexts so large as to make it downright idiotic to assume there are no differences – it doesn’t make sense; it’s frankly a stupid thing to do. To give an example: in Germany the probability that a random person, about whom you know nothing, has been a suspect in a thievery case is 22% if that random person happens to be of Algerian extraction, whereas it’s only 0.27% if you’re dealing with an immigrant from China. Roughly one in 13 of those Algerians has also been involved in a case of ‘bodily harm’, which is true of less than one in 400 of the Chinese immigrants.
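To spell out the arithmetic behind ‘orders of magnitude’ (using the figures just cited):

```python
# Subgroup rates as cited above (German crime statistics, by origin).
thievery = {"Algerian": 0.22, "Chinese": 0.0027}
bodily_harm = {"Algerian": 1 / 13, "Chinese": 1 / 400}

for label, rates in (("thievery", thievery), ("bodily harm", bodily_harm)):
    ratio = rates["Algerian"] / rates["Chinese"]
    print(f"{label}: relative risk ~{ratio:.0f}x")
# thievery: ~81x (i.e., getting on for two orders of magnitude)
# bodily harm: ~31x
```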

v. Assessing Immigrant Integration in Sweden after the May 2013 Riots. Some data from the article:

“Today, about one-fifth of Sweden’s population has an immigrant background, defined as those who were either born abroad or born in Sweden to two immigrant parents. The foreign born comprised 15.4 percent of the Swedish population in 2012, up from 11.3 percent in 2000 and 9.2 percent in 1990 […] Of the estimated 331,975 asylum applicants registered in EU countries in 2012, 43,865 (or 13 percent) were in Sweden. […] More than half of these applications were from Syrians, Somalis, Afghanis, Serbians, and Eritreans. […] One town of about 80,000 people, Södertälje, since the mid-2000s has taken in more Iraqi refugees than the United States and Canada combined.”

“Coupled with […] macroeconomic changes, the largely humanitarian nature of immigrant arrivals since the 1970s has posed challenges of labor market integration for Sweden, as refugees often arrive with low levels of education and transferable skills […] high unemployment rates have disproportionately affected immigrant communities in Sweden. In 2009-10, Sweden had the highest gap between native and immigrant employment rates among OECD countries. Approximately 63 percent of immigrants were employed compared to 76 percent of the native-born population. This 13 percentage-point gap is significantly greater than the OECD average […] Explanations for the gap include less work experience and domestic formal qualifications such as language skills among immigrants […] Among recent immigrants, defined as those who have been in the country for less than five years, the employment rate differed from that of the native born by more than 27 percentage points. In 2011, the Swedish newspaper Dagens Nyheter reported that 35 percent of the unemployed registered at the Swedish Public Employment Service were foreign born, up from 22 percent in 2005.”

“As immigrant populations have grown, Sweden has experienced a persistent level of segregation — among the highest in Western Europe. In 2008, 60 percent of native Swedes lived in areas where the majority of the population was also Swedish, and 20 percent lived in areas that were virtually 100 percent Swedish. In contrast, 20 percent of Sweden’s foreign born lived in areas where more than 40 percent of the population was also foreign born.”

vi. Book recommendations. Or rather, author recommendations. A while back I asked ‘the people of SSC’ if they knew of any fiction authors I hadn’t read yet who were both funny and easy to read. I got a lot of good suggestions, and the roughly 20 Dick Francis novels I’ve read during the fall were read as a consequence of that thread.

vii. On the genetic structure of Denmark.

viii. Religious Fundamentalism and Hostility against Out-groups: A Comparison of Muslims and Christians in Western Europe.

“On the basis of an original survey among native Christians and Muslims of Turkish and Moroccan origin in Germany, France, the Netherlands, Belgium, Austria and Sweden, this paper investigates four research questions comparing native Christians to Muslim immigrants: (1) the extent of religious fundamentalism; (2) its socio-economic determinants; (3) whether it can be distinguished from other indicators of religiosity; and (4) its relationship to hostility towards out-groups (homosexuals, Jews, the West, and Muslims). The results indicate that religious fundamentalist attitudes are much more widespread among Sunnite Muslims than among native Christians, even after controlling for the different demographic and socio-economic compositions of these groups. […] Fundamentalist believers […] show very high levels of out-group hostility, especially among Muslims.”

ix. Portal: Dinosaurs. It would have been so incredibly awesome to have had access to this kind of stuff back when I was a child. The portal includes links to articles with names like ‘Bone Wars‘ – what’s not to like? Again, awesome!

x. “you can’t determine if something is truly random from observations alone. You can only determine if something is not truly random.” (link) An important insight well expressed.

xi. Chessprogramming. If you’re interested in having a look at how chess programs work, this is a neat resource. The wiki contains lots of links with information on specific sub-topics of interest. Also chess-related: The World Championship match between Carlsen and Karjakin has started. To the extent that I follow the live coverage, it’ll be Svidler et al.’s coverage on chess24. Robin van Kampen and Eric Hansen – both 2600+ elo GMs – did quite well yesterday, in my opinion.

xii. Justified by More Than Logos Alone (Razib Khan).

“Very few are Roman Catholic because they have read Aquinas’ Five Ways. Rather, they are Roman Catholic, in order of necessity, because God aligns with their deep intuitions, basic cognitive needs in terms of cosmological coherency, and because the church serves as an avenue for socialization and repetitive ritual which binds individuals to the greater whole. People do not believe in Catholicism as often as they are born Catholics, and the Catholic religion is rather well fitted to a range of predispositions to the typical human.”

November 12, 2016 Posted by | books, Chemistry, Chess, data, dating, demographics, genetics, Geography, immigration, Paleontology, papers, Physics, Psychology, random stuff, religion | Leave a comment

Role of Biomarkers in Medicine

“The use of biomarkers in basic and clinical research has become routine in many areas of medicine. They are accepted as molecular signatures that have been well characterized and repeatedly shown to be capable of predicting relevant disease states or clinical outcomes. In Role of Biomarkers in Medicine, expert researchers in their individual field have reviewed many biomarkers or potential biomarkers in various types of diseases. The topics address numerous aspects of medicine, demonstrating the current conceptual status of biomarkers as clinical tools and as surrogate endpoints in clinical research.”

The above quote is from the preface of the book. Here’s my goodreads review. I have read about biomarkers before – for previous posts on this topic, see this link. I added the link in part because the coverage provided in this book is, in my opinion, generally of somewhat lower quality than the coverage provided in some of the other books I’ve read on these topics. However, the fact that the book is not amazing should probably not keep me from sharing some observations of interest from it, which I have done in this post.

we suggest more precise studies to establish the exact role of this hormone […] additional studies are necessary […] there are conflicting results […] require further investigation […] more intervention studies with long-term follow-up are required. […] further studies need to be conducted […] further research is needed (There are a lot of comments like these in the book; I figured I should include a few in my coverage…)

“Cancer biomarkers (CB) are biomolecules produced either by the tumor cells or by other cells of the body in response to the tumor, and CB could be used as screening/early detection tool of cancer, diagnostic, prognostic, or predictor for the overall outcome of a patient. Moreover, cancer biomarkers may identify subpopulations of patients who are most likely to respond to a given therapy […] Unfortunately, […] only very few CB have been approved by the FDA as diagnostic or prognostic cancer markers […] 25 years ago, the clinical usefulness of CB was limited to be an effective tool for patient’s prognosis, surveillance, and therapy monitoring. […] CB have [since] been reported to be used also for screening of general population or risk groups, for differential diagnosis, and for clinical staging or stratification of cancer patients. Additionally, CB are used to estimate tumor burden and to substitute for a clinical endpoint and/or to measure clinical benefit, harm or lack of benefit, or harm [4, 18, 30]. Among commonly utilized biomarkers in clinical practice are PSA, AFP, CA125, and CEA.”

“Bladder cancer (BC) is the second most common malignancy in the urologic field. Preoperative predictive biomarkers of cancer progression and prognosis are imperative for optimizing […] treatment for patients with BC. […] Approximately 75–85% of BC cases are diagnosed as nonmuscle-invasive bladder cancer (NMIBC) […] NMIBC has a tendency to recur (50–70%) and may progress (10–20%) to a higher grade and/or muscle-invasive BC (MIBC) in time, which can lead to high cancer-specific mortality [2]. Histological tumor grade is one of the clinical factors associated with outcomes of patients with NMIBC. High-grade NMIBC generally exhibits more aggressive behavior than low-grade NMIBC, and it increases the risk of a poorer prognosis […] Cystoscopy and urine cytology are commonly used techniques for the diagnosis and surveillance of BC. Cystoscopy can identify […] most papillary and solid lesions, but this is highly invasive […] urine cytology is limited by examiner experience and low sensitivity. For these reasons, some tumor markers have been investigated […], but their sensitivity and specificity are limited [5] and they are unable to predict the clinical outcome of BC patients. […] Numerous efforts have been made to identify tumor markers. […] However, a serum marker that can serve as a reliable detection marker for BC has yet to be identified.”

“Endometrial cancer (EmCa) is the most common type of gynecological cancer. EmCa is the fourth most common cancer in the United States, which has been linked to increased incidence of obesity. […] there are no reliable biomarker tests for early detection of EmCa and treatment effectiveness. […] Approximately 75% of women with EmCa are postmenopausal; the most common symptom is postmenopausal bleeding […] Approximately 15% of women diagnosed with EmCa are younger than 50 years of age, while 5% are diagnosed before the age of 40 [29]. […] Roughly, half of the EmCa cases are linked to obesity. Obese women are four times more likely to develop EmCa when compared to normal weight women […] Obese individuals oftentimes exhibit resistance to leptin and show high levels of the adipokine in blood, which is known as leptin resistance […] prolonged exposure of leptin damages the hypothalamus causing it to become insensitive to the effects of leptin […] Evidence shows that leptin is an important pro-inflammatory, pro-angiogenic, and mitogenic factor for cancer. Leptin produced by cancer cells acts in an autocrine and paracrine manner to promote tumor cell proliferation, migration and invasion, pro-inflammation, and angiogenesis [58, 70]. High levels of leptin […] are associated with metastasis and decreased survival rates in breast cancer patients [58]. […] Metabolic syndrome including obesity, hypertension, insulin resistance, diabetes, and dyslipidemia increase the risk of developing multiple malignancies, particularly EmCa [30]. Younger women diagnosed with EmCa are usually obese, and their carcinomas show a well-differentiated histology [20].”

“Normally, tumor suppressor genes act to inhibit or arrest cell proliferation and tumor development [37]. However, when mutated, tumor suppressors become inactive, thus permitting tumor growth. For example, mutations in p53 have been determined in various cancers such as breast, colon, lung, endometrium, leukemias, and carcinomas of many tissues. These p53 mutations are found in approximately 50% of all cancers [38]. Roughly 10–20% of endometrial carcinomas exhibit p53 mutations [37]. […] overexpression of mutated tumor suppressor p53 has been associated with Type II EmCa (poor histologic grade, non-endometrioid histology, advanced stage, and poor survival).”

“Increasing data indicate that oxidative stress is involved in the development of DR [diabetic retinopathy] [16–19]. The retina has a high content of polyunsaturated fatty acids and has the highest oxygen uptake and glucose oxidation relative to any other tissue. This phenomenon renders the retina more susceptible to oxidative stress [20]. […] Since long-term exposure to oxidative stress is strongly implicated in the pathogenesis of diabetic complications, polymorphic genes of detoxifying enzymes may be involved in the development of DR. […] A meta-analysis comprising 17 studies, including type 1 and type 2 diabetic patients from different ethnic origins, implied that the C (Ala) allele of the C47T polymorphism in the MnSOD gene had a significant protective effect against microvascular complications (DR and diabetic nephropathy) […] In the development of DR, superoxide levels are elevated in the retina, antioxidant defense system is compromised, MnSOD is inhibited, and mitochondria are swollen and dysfunctional [77,87–90]. Overexpression of MnSOD protects [against] diabetes-induced mitochondrial damage and the development of DR [19,91].”

“Continuous high level of blood glucose in diabetes damages micro and macro blood vessels throughout the body by altering the endothelial cell lining of the blood vessels […] Diabetes threatens vision, and patients with diabetes develop cataracts at an earlier age and are nearly twice as likely to get glaucoma compared to non-diabetic[s] [3]. More than 75% of patients who have had diabetes mellitus for more than 20 years will develop diabetic retinopathy (DR) [4]. […] DR is a slow progressive retinal disease and occurs as a consequence of longstanding accumulated functional and structural impairment of the retina by diabetes. It is a multifactorial condition arising from the complex interplay between biochemical and metabolic abnormalities occurring in all cells of the retina. DR has been classically regarded as a microangiopathy of the retina, involving changes in the vascular wall leading to capillary occlusion and thereby retinal ischemia and leakage. And more recently, the neural defects in the retina are also being appreciated […]. Recently, various clinical investigators [have detected] neuronal dysfunction at very early stages of diabetes and numerous abnormalities in the retina can be identified even before the vascular pathology appears [76, 77], thus suggesting a direct effect of diabetes on the neural retina. […] An emerging issue in DR research is the focus on the mechanistic link between chronic low-grade inflammation and angiogenesis. Recent evidence has revealed that extracellular high-mobility group box-1 (HMGB1) protein acts as a potent proinflammatory cytokine that triggers inflammation and recruits leukocytes to the site of tissue damage, and exhibits angiogenic effects. The expression of HMGB1 is upregulated in epiretinal membranes and vitreous fluid from patients with proliferative DR and in the diabetic retina. […] HMGB1 may be a potential biomarker [for diabetic retinopathy] […] early blockade of HMGB1 may be an effective strategy to prevent the progression of DR.”

“High blood pressure is one of the leading risk factors for global mortality and is estimated to have caused 9.4 million deaths in 2010. A meta‐analysis which includes 1 million individuals has indicated that death from both CHD [coronary heart disease] and stroke increase progressively and linearly from BP levels as low as 115 mmHg systolic and 75 mmHg diastolic upwards [138]. The WHO [has] pointed out that a “reduction in systolic blood pressure of 10 mmHg is associated with a 22% reduction in coronary heart disease, 41% reduction in stroke in randomized trials, and a 41–46% reduction in cardiometabolic mortality in epidemiological studies” [139].”
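If one reads the dose-response as log-linear (a common assumption in this literature), the 22 % relative CHD risk reduction per 10 mmHg compounds multiplicatively with larger reductions; a minimal sketch:

```python
# A 10 mmHg systolic reduction giving a 22% CHD risk reduction means RR = 0.78;
# under a log-linear dose-response the relative risk compounds per 10 mmHg.
rr_per_10 = 0.78

for drop in (10, 20, 30):
    rr = rr_per_10 ** (drop / 10)
    print(f"{drop} mmHg lower: RR = {rr:.2f} ({1 - rr:.0%} lower CHD risk)")
# 10 mmHg: RR 0.78 (22% lower)
# 20 mmHg: RR 0.61 (39% lower)
# 30 mmHg: RR 0.47 (53% lower)
```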

“Several reproducible studies have ascertained that individuals with autism demonstrate an abnormal brain 5-HT system […] peripheral alterations in the 5-HT system may be an important marker of central abnormalities in autism. […] In a recent study, Carminati et al. [129] tested the therapeutic efficacy of venlafaxine, an antidepressant drug that inhibits the reuptake of 5-HT, and [found] that venlafaxine at a low dose [resulted in] a substantial improvement in repetitive behaviors, restricted interests, social impairment, communication, and language. Venlafaxine probably acts via serotonergic mechanisms […] OT [Oxytocin]-related studies in autism have repeatedly reported lower blood OT level in autistic patients compared to age- and gender-matched control subjects […] autistic patients demonstrate an altered neuroinflammatory response throughout their lives; they also show increased astrocyte and microglia inflammatory response in the cortex and the cerebellum [47, 48].”

November 3, 2016 Posted by | books, cancer, diabetes, genetics, medicine, Neurology, Pharmacology | Leave a comment

Water Supply in Emergency Situations (II)

Here’s my first post about the book. In this post I’ve added a few more quotes from a couple of the last chapters of the book:

“Due to the high complexity of the [water supply] systems, and the innumerable possible points of contaminant insertion, complete prevention of all possible terror attacks (chemical, biological, or radiological) on modern drinking water supplying systems […] seems to be an impossible goal. For example, in the USA there are about 170,000 water systems, with about 8,100 very large systems that serve 90% of the population who get water from a community water system […] The prevailing approach to the problem of drinking water contamination is based on the implementation of surveillance measures and technologies for “risk reduction” such as improvement of physical security measures of critical assets (high-potential vulnerability to attacks), [and] installation of online contaminant monitoring systems (OCMS) with capabilities to detect and warn in real time on relevant contaminants, as part of standard operating procedures for quality control (QC) and supervisory control and data acquisition (SCADA) systems. […] Despite the impressive technical progress in online water monitoring technologies […] detection with complete certainty of pollutants is expensive, and remains problematic.”

“A key component of early warning systems is the availability of a mathematical model for predicting the transport and fate of the spill or contaminant so that downstream utilities can be warned. […] Simulation tools (i.e. well-calibrated hydraulic and water quality models) can be linked to SCADA real-time databases allowing for continuous, high-speed modeling of the pressure, flow, and water quality conditions throughout the water distribution network. Such models provide the operator with computed system status data within the distribution network. These “virtual sensors” complement the measured data. Anomalies between measured and modeled data are automatically observed, and computed values that exceed predetermined alarm thresholds are automatically flagged by the SCADA system.”
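A minimal sketch of the “virtual sensor” comparison described above – measured SCADA readings checked against model-computed values, with threshold-based alarms. The node names, readings, and threshold are all hypothetical:

```python
# Compare measured SCADA readings against values computed by a hydraulic/
# water quality model and flag anomalies exceeding a preset alarm threshold.

def flag_anomalies(measured, modeled, threshold):
    """Return (sensor, measured, modeled) triples whose gap exceeds threshold."""
    flags = []
    for sensor_id, observed in measured.items():
        expected = modeled.get(sensor_id)
        if expected is not None and abs(observed - expected) > threshold:
            flags.append((sensor_id, observed, expected))
    return flags

# Hypothetical chlorine residual readings (mg/L) at three network nodes.
measured = {"node_A": 0.55, "node_B": 0.12, "node_C": 0.48}
modeled  = {"node_A": 0.52, "node_B": 0.47, "node_C": 0.50}

for sensor_id, obs, exp in flag_anomalies(measured, modeled, threshold=0.2):
    print(f"ALARM {sensor_id}: measured {obs} vs modeled {exp}")
```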

“Any given tap receives water, which arrives through a number of pipes in the supply network, the transport route, and ultimately comes from a source […] in order to achieve maximum supply security in case of pipe failures or unusual demand patterns (e.g. fire flows) water supply networks are generally designed as complicated, looped systems, where each tap typically can receive water from several sources and intermediate storage facilities. This means that the water from any given tap can arrive through several different routes and can be a mixture of water from several sources. The routes and sources for a given tap can vary over time […] A model can show: *Which sources (well-fields, reservoirs, and tanks) contribute to the supply of which parts of the city? *Where does the water come from (percentage distribution) at any specific location in the system (any given tap or pipe)? *How long has the water been traveling in the pipe system, before it reaches a specific location?
One way to reduce the risk – and simplify the response to incidents – is by compartmentalizing the water supply system. If each tap receives water from one and only one reservoir, pollution of one reservoir will affect one well-defined and relatively smaller part of the city. Compartmentalizing the water supply system will reduce the spreading of toxic substances. On the flip side, it may increase the concentration of the toxic substance. It is also likely to have a negative impact on the supply of water for fire flow and on the robustness of the water supply network in case of failures of pipes or other elements.”
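The ‘which sources feed which tap’ question is, at its core, reachability on a directed graph of pipes. Real simulation tools add full hydraulics on top (flow directions change with demand), but a static toy version, on an entirely hypothetical network, looks like this:

```python
# Toy source-tracing: walk the pipe graph backwards from a tap and
# collect every source that can reach it. Static reachability only;
# real models also solve the hydraulics.

from collections import deque

def sources_reaching(tap, edges, sources):
    """Return the subset of sources with a directed path to the tap."""
    upstream = {}  # node -> list of nodes feeding it
    for a, b in edges:
        upstream.setdefault(b, []).append(a)
    seen, queue, found = {tap}, deque([tap]), set()
    while queue:
        node = queue.popleft()
        if node in sources:
            found.add(node)
        for parent in upstream.get(node, []):
            if parent not in seen:
                seen.add(parent)
                queue.append(parent)
    return found

# Hypothetical looped network: two well-fields and a reservoir feed one tap.
edges = [("well_1", "junction"), ("well_2", "junction"),
         ("reservoir", "tap_17"), ("junction", "tap_17")]
print(sources_reaching("tap_17", edges, {"well_1", "well_2", "reservoir"}))
```

In this picture, compartmentalization amounts to removing edges until each tap is reachable from exactly one source – which is precisely why it shrinks the affected area while reducing redundancy.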

An important point in the context of that part of the coverage is that if you want online (i.e. continuous, all-the-time) monitoring of drinking water, well, that’s going to be expensive regardless of how precisely you go about doing it. Another related problem is that it’s actually not a simple matter to figure out what it even makes sense to test for when you’re analyzing the water (you can’t test for ‘everything’ all the time, and so the leading approach in the monitoring systems employed today is, according to the authors, based on the idea of using ‘surrogate parameters’ which may be particularly informative about any significant changes in the quality of the drinking water taking place).

“After the collapse of the Soviet Union, the countries of the South Caucasus gained their independence. However, they faced problems associated with national and transboundary water management. Transboundary water management remains one of the key issues leading to conflict in the region today. The scarcity of water especially in downstream areas is a major problem […] The fresh surface water resources of the South Caucasus mainly consist of runoff from the Kura-Araz River basins. […] Being a water-poor region, water supply over the Azerbaijan Republic territory totals about 100,000 m3/km2, which amounts to an average of about 1,000 m3 of water per person per year. Accordingly, Azerbaijan Republic occupies one of the lowest places in the world in water availability. Water resources of the Republic are distributed very irregularly over administrative districts.”

“Water provision [in Azerbaijan] […] is carried out by means of active hydrotechnical constructions, which are old-fashioned, and many water intake facilities and treatment systems cannot operate during high flooding, water turbidity, and extreme pollution. […] Tap water satisfies [the] needs of only 50% of the population, and some areas experience lack of drinking water. Due to the lack of water supply networks and deteriorated conditions of those existing, about half of the water is lost within the distribution system. […] The sewage system of the city of Baku covers only 70% of its territory and only about half of sewage is treated […] Owing to rapid growth of turbidity of Kura (and its inflows) during high water the water treatment facilities are rendered inoperable thus causing failures in the water supply of the population of the city of Baku. Such situations mainly take place in autumn and spring on the average 3–5 times a year for 1–2 days. In the system of centralized water supply of the city of Baku about 300 emergency cases occur annually […] Practically nobody works with the population to promote efficient water use practices.”

October 31, 2016 Posted by | books, Engineering, Geography | Leave a comment

Diabetic nephropathies

Bakris et al.’s text is the first book I’ve read specifically devoted to the topic of DN. As I pointed out on goodreads, “this is a well-written and interesting work which despite the low page count covers quite a bit of ground. A well-sourced and to-the-point primer on these topics.” Below I have added a few observations from the book.

“Diabetic nephropathy (DN), also known as diabetic kidney disease (DKD), is one of the most important long-term complications of diabetes and the most common cause of endstage renal disease (ESRD) worldwide. DKD […] is defined as structural and functional renal damage manifested as clinically detected albuminuria in the presence of normal or abnormal glomerular filtration rate (GFR). […] Patients with DKD […] account for one-third of patients demanding renal transplantation. […] in the United States, Medicare expenditure on treating ESRD is approximately US $33 billion (as of 2010), which accounts for 8–9 % of the total annual health-care budget […] According to the United States Renal Data System […], the incidence of ESRD requiring RRT [in 2012] was 114,813 patients, with 44 % due to DKD [9]. A registry report from Japan revealed a nearly identical relative incidence, with 44.2 % of the patients with ESRD caused by diabetes”

Be careful not to confuse incidence and prevalence here; the proportion of new ESRD cases in any given year that are due to diabetes is almost certainly higher than the proportion of people living with ESRD who have diabetes, because diabetics with kidney failure die at a higher rate than do other people with kidney failure. This problem/fact tends to make some questions hard to answer; to give an example, how large a share of total kidney-disease-related medical costs is attributable to diabetes seems to me far from an easy question to answer, because you are in some sense not making an apples-to-apples comparison: a lot might well depend on the chosen discount rate and on how you address the excess mortality in the diabetes sample, and even ‘simply’ adding up medical outlays for the diabetes and non-diabetes samples would require a lot of data (which may not be available) and work. You definitely cannot just combine the estimates provided above and assume that the 44% incidence translates into 44% of people with ESRD having diabetes; it’s not clear in the text where the ‘one-third of patients’ number above comes from, but if that’s also US data then it should be obvious from the difference between these numbers that there’s a lot of excess mortality in the diabetes sample (I have included specific data from the publication on these topics below). The book also talks about the fact that the type of dialysis used in a case of kidney failure will to some extent depend on the health status of the patient, and that diabetes is a significant variable in that context; this means that the available/tolerable treatment options for the kidney disease component may not be the same for a diabetic as for a patient with, say, lupus nephritis, and it also means that the patient groups most likely are not ‘equally sick’, so basing cost estimates on cost averages might lead to misleading results if severity of disease and (true) treatment costs are related, as they usually are.
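To make the incidence-vs-prevalence point concrete, here is a toy steady-state calculation (prevalence ≈ incidence × mean duration); the survival figures are made-up numbers for illustration, and only the 44% incident share is from the book:

```python
# Toy steady-state model: a group's share of prevalent cases equals its
# share of incident cases weighted by how long its members survive.
# Survival times below are hypothetical; only the 44% figure is quoted.

incident_share_diabetic = 0.44  # quoted USRDS share of new ESRD cases
mean_years_diabetic = 4.0       # hypothetical mean survival on RRT
mean_years_other = 8.0          # hypothetical mean survival on RRT

prevalent_diabetic = incident_share_diabetic * mean_years_diabetic
prevalent_other = (1 - incident_share_diabetic) * mean_years_other
share = prevalent_diabetic / (prevalent_diabetic + prevalent_other)
print(f"prevalent share with diabetes: {share:.0%}")  # ~28% with these inputs
```

With these (invented) survival times the diabetic share of prevalent cases lands near the ‘one-third’ figure despite the 44% incident share, which is exactly the mechanism at work.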

“A recent analysis revealed an estimated diabetes prevalence of 12–14 % among adults in the United States […] In the age group ≥65 years, this amounts to more than 20 %”.

It should be emphasized in the context of the above numbers that the prevalence of DKD is highly variable across countries/populations – the authors also include in the book the observation that: “Over a period of 20 years, 32 studies from 16 countries revealed a prevalence ranging from 11 to 83 % of patients with diabetes”. Some more prevalence data:

“DKD affects about 30 % of patients with type 1 diabetes and 25–40 % of the patients with type 2 diabetes. […] The global prevalence of micro- and macroalbuminuria is estimated at 39 % and 10 %, respectively […] (NHANES III) […] reported a prevalence of 35 % (microalbuminuria) and 6 % (macroalbuminuria) in patients with T2DM aged ≥40 years [24]. In another study, this was reported to be 43 % and 12 %, respectively, in a Japanese population [23]. According to the European Diabetes (EURODIAB) Prospective Complications Study Group, in patients with T1DM, the incidence of microalbuminuria was 12.6 % (over 7.3 years) [25]. This prevalence was further estimated at 33 % in an 18-year follow-up study in Denmark […] In the United Kingdom Prospective Diabetes Study (UKPDS), proteinuria [had] a peak incidence after around 15–20 years after diabetes diagnosis.”

I won’t cover the pathophysiology parts in too much detail here, but a few new things I learned do need to be mentioned:

“A natural history of DKD was first described in the 1970s by Danish physicians [32]. It was characterized by a long silent period without overt clinical signs and symptoms of nephropathy and progression through various stages, starting from hyperfiltration, microalbuminuria, macroalbuminuria, and overt renal failure to ESRD. Microalbuminuria (30–300 mg/day of albumin in urine) is a sign of early DKD, whereas macroalbuminuria (>300 mg/day) represents DKD progression. [I knew this stuff. The stuff that follows below was however something I did not know:]
However, this ‘classical’ natural evolution of urinary albumin excretion and change in GFR is not present in many patients with diabetes, especially those with type 2 diabetes [34]. These patients can have reduction or disappearance of proteinuria over time or can develop even overt renal disease in the absence of proteinuria [30, 35]. […] In the Wisconsin Epidemiologic Study of Diabetic Retinopathy (WESDR) of patients with T2DM, 45.2 % of participants developed albuminuria, and 29 % developed renal impairment over a 15-year follow-up period [37]. Of those patients who developed renal impairment, 61 % did not have albuminuria beforehand, and 39 % never developed albuminuria during the study. Of the patients that developed albuminuria, only 24 % subsequently developed renal impairment during the study. A significant degree of discordance between development of albuminuria and renal impairment is apparent [37]. These data, thus, do not support the classical paradigm of albuminuria always preceding renal impairment in the progression of DKD. […] renal hyperfiltration and rapid GFR decline are considered stronger predictors of nephropathy progression in type 1 diabetes than presence of albuminuria [67]. The annual eGFR loss in patients with DKD is >3 mL/min/1.73 m2 or 3.3 % per year.”

As for the last part about renal hyperfiltration, they however also note later in the coverage in a different chapter that “recent long-term prospective surveys cast doubt on the validity of glomerular hyperfiltration being predictive of renal outcome in patients with type 1 diabetes”. Various factors mentioned in the coverage – some of which are very hard to avoid and some of which are actually diabetes-specific – contribute to measurement error, which may be part of the explanation for the sub-optimal performance of the prognostic markers employed.
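As a back-of-the-envelope illustration of the decline rates quoted above (>3 mL/min/1.73 m2 or 3.3 % per year): the two figures roughly coincide at an eGFR around 90, but they imply quite different long-run trajectories. The baseline value and the eGFR < 15 ESRD cut-off in the sketch below are standard illustrative choices, not numbers from the book:

```python
# Projecting time-to-ESRD under the two quoted decline rates, starting
# from a hypothetical baseline eGFR of 75 mL/min/1.73 m2 down to the
# conventional stage-5 threshold of 15. Illustrative arithmetic only.
import math

baseline, threshold = 75.0, 15.0

# Constant absolute decline of 3 mL/min/1.73 m2 per year:
years_linear = (baseline - threshold) / 3.0

# Constant proportional decline of 3.3% per year:
years_proportional = math.log(threshold / baseline) / math.log(1 - 0.033)

print(f"linear decline: ~{years_linear:.0f} years")              # ~20 years
print(f"proportional decline: ~{years_proportional:.0f} years")  # ~48 years
```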

An important observation I think I have mentioned before here on the blog is that diabetic nephropathy is not just bad because people who develop this complication may ultimately develop kidney failure, but is also bad because diabetics may die before they even do that; diabetics with even moderate stages of nephropathy have high mortality from cardiovascular disease, so if you only consider diabetics who actually develop kidney failure you may miss some of the significant adverse health effects of this complication; it might be argued that doing this would be a bit like analyzing the health outcomes of smokers while only tallying the cancer cases, and ignoring e.g. the smoking-associated excess deaths from cardiovascular disease. Some observations from the book on this topic:

“Comorbid DM and DKD are associated with high cardiovascular morbidity and mortality. The risk of cardiovascular disease is disproportionately higher in patients with DKD than patients with DM who do not have kidney disease [76]. The incident dialysis rate might even be higher after adjusting for patients dying from cardiovascular disease before reaching ESRD stage [19]. The United States Renal Data System (USRDS) data shows that elderly patients with a triad of DM, chronic kidney disease (CKD), and heart failure have a fivefold higher chance of death than progression to CKD and ESRD [36]. The 5-year survival rate for diabetic patients with ESRD is estimated at 20 % […] This is higher than the mortality rate for many solid cancers (including prostate, breast, or renal cell cancer). […] CVD accounts for more than half of deaths of patients undergoing dialysis […] the 5-year survival rate is much lower in diabetic versus nondiabetic patients undergoing hemodialysis […] Adler et al. tested whether HbA1c levels were associated with death in adults with diabetes starting HD or peritoneal dialysis [38]. Of 3157 patients observed for a median time of 2.7 years, 1688 died. [this example provided, I thought, a neat indication of what sort of data you end up with when you look at samples with a 20% 5-year survival rate] […] Despite modern therapies […] most patients continue to show progressive renal damage. This outcome suggests that the key pathogenic mechanisms involved in the induction and progression of DN remain, at least in part, active and unmodified by the presently available therapies.” (my emphasis)

The link between blood glucose (HbA1c) and risk of microvascular complications such as DN is strong and well-documented, but HbA1c does not explain everything:

“Only a subset of individuals living with diabetes […] develop DN, and studies have shown that this is not just due to poor blood glucose control [50–54]. DN appears to cluster in families […] Several consortia have investigated genetic risk factors […] Genetic risk factors for DN appear to differ between patients with type 1 and type 2 diabetes […] The pathogenesis of DN is complex and has not yet been completely elucidated […] [It] is multifactorial, including both genetic and environmental factors […]. Hyperglycemia affects patients carrying candidate genes associated with susceptibility to DN and results in metabolic and hemodynamic alterations. Hyperglycemia alters vasoactive regulators of glomerular arteriolar tone and causes glomerular hyperfiltration. Production of AGEs and oxidative stress interacts with various cytokines such as TGF-β and angiotensin II to cause kidney damage. Additionally, oxidative stress can cause endothelial dysfunction and systemic hypertension. Inflammatory pathways are also activated and interact with the other pathways to cause kidney damage.”

“An early clinical sign of DN is moderately increased urinary albumin excretion, referred to as microalbuminuria […] microalbuminuria has been shown to be closely associated with an increased risk of cardiovascular morbidity and mortality [and] is [thus] not only a biomarker for the early diagnosis of DN but also an important therapeutic target […] Moderately increased urinary albumin excretion that progresses to severely increased albuminuria is referred to as macroalbuminuria […] Severely increased albuminuria is defined as an ACR≥300 mg/g Cr; it leads to a decline in renal function, which is defined in terms of the GFR [8] and generally progresses to ESRD 6–8 years after the onset of overt proteinuria […] patients with type 1 diabetes are markedly younger than type 2 patients. The latter usually develop ESRD in their mid-fifties to mid-sixties. According to a small but carefully conducted study, both type 1 and type 2 patients take an average of 77–81 months from the stage of producing macroproteinuria with near-normal renal function to developing ESRD [17].”

“Patients with diabetes and kidney disease are at increased risk of hypoglycemia due to decreased clearance of some of the medications used to treat diabetes such as insulin, as well as impairment of renal gluconeogenesis from having a lower kidney mass. As the kidney is responsible for about 30–80 % of insulin removal, reduced kidney function is associated with a prolonged insulin half-life and a decrease in insulin requirements as estimated glomerular filtration rate (eGFR) decline[s] […] Metformin [a first-line drug for treating type 2 diabetes, US] should be avoided in patients with an eGFR < 30 mL/min/1.73 m2. It is recommended that metformin is stopped in the presence of situations that are associated with hypoxia or an acute decline in kidney function such as sepsis/shock, hypotension, acute myocardial infarction, and use of radiographic contrast or other nephrotoxic agents […] The ideal medication regimen is based on the specific needs of the patient and physician experience and should be individualized, especially as renal function changes. […] Lower HbA1c levels are associated with higher risks of hypoglycemia so the HbA1c target should be individualized […] Whereas patients with mild renal insufficiency can receive most antihyperglycemic treatments without any concern, patients with CKD stage 3a and, in particular, with CKD stages 3b, 4, and 5 often require treatment adjustments according to the degree of renal insufficiency […] Higher HbA1c targets should be considered for those with shortened life expectancies, a known history of severe hypoglycemia or hypoglycemia unawareness, CKD, and children.”
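The kind of eGFR-based treatment adjustment described above is easy to express as a lookup. A minimal sketch, using the standard KDIGO G-stage boundaries and the quoted metformin cut-off; everything else here is illustrative, and obviously none of this is prescribing guidance:

```python
# Map eGFR to a KDIGO G-stage and apply the quoted metformin rule
# (avoid below eGFR 30 mL/min/1.73 m2). Illustrative only.

def ckd_stage(egfr):
    """Map eGFR (mL/min/1.73 m2) to a KDIGO G-stage label."""
    for threshold, stage in ((90, "G1"), (60, "G2"), (45, "G3a"),
                             (30, "G3b"), (15, "G4")):
        if egfr >= threshold:
            return stage
    return "G5"

def metformin_allowed(egfr):
    """Apply the quoted contraindication: avoid below eGFR 30."""
    return egfr >= 30

for egfr in (95, 50, 28):
    print(egfr, ckd_stage(egfr),
          "metformin OK" if metformin_allowed(egfr) else "metformin contraindicated")
```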

“In cases where avoidance of development of DKD has failed, the second approach is slowing disease progression. The most important therapeutic issues at this stage are control of hypertension and hyperglycemia. […] Hypertension is present in up to 85 % of patients with DN/DKD, depending on the duration and stage (e.g., higher in more progressive cases). […] In a recent meta-analysis, the efficacy and safety of blood pressure-lowering agents in adults with diabetes and kidney disease was analyzed […] In total, 157 studies comprising 43,256 participants, mostly with type 2 diabetes and CKD, were included in the network meta-analysis. No drug regimen was found to be more effective than placebo for reducing all-cause mortality. […] DKD is accompanied by abnormalities in lipid metabolism related to decline in kidney function. The association between higher low-density lipoprotein cholesterol (LDL-C) and risk of myocardial infarction is weaker for people with lower baseline eGFR, despite higher absolute risk of myocardial infarction [53]. Thus, increased LDL-C seems to be less useful as a marker of coronary risk among people with CKD than in the general population.”

“An analysis of the USRDS data revealed an RR of 0.27 (95 % CI, 0.24–0.30) 18 months after transplantation in patients with diabetes in comparison to patients on dialysis on a transplant waiting list [76]. The gain in projected years of life with transplantation amounted to 11 years in patients with DKD in comparison to patients without transplantation.”

October 27, 2016 Posted by | books, diabetes, medicine, Pharmacology | Leave a comment

Water Supply in Emergency Situations (I)

I didn’t think much of this book (here’s my goodreads review), but I did learn some new things from reading it. Some of the coverage in the book overlapped a little with stuff I’d read before, e.g. coverage provided in publications such as Rodricks and Fong and Alibek, but I read those books in 2013 and 2014 respectively (so I’ve already forgotten a great deal) and most of the material in the book was new to me. Below I’ve added a few observations and data from the first half of the publication.

“Mediterranean basin demands for water are high. Today, the region uses around 300 billion cubic meters per year. Two thirds of Mediterranean countries now use over 500 m3 per year per inhabitant mainly because of heavy use of irrigation. But these per capita demands are irregular and vary across a wide range – from a little over 100 to more than 1,000 m3 per year. Globally, demand has doubled since the beginning of the 20th century and increased by 60% over the last 25 years. […] the Middle East ecosystems […] populate some 6% of the world population, but have only some 1% of its renewable fresh water. […] Seasonality of both supply and demand due to tourism […] aggravate water resource problems. During the summer months, water shortages become more frequent. Distribution networks left unused during the winter period face overload pressures in the summer. On the other hand, designing the system with excess capability to satisfy tourism-related summer peak demands raises construction and maintenance costs significantly.”

“There are over 30,000 km of mains within London and over 30% of these are over 150 years old; they serve 7.5 million people with 2,500 million liters of water a day.”

“A major flooding of the Seine River would have tremendous consequences and would impact very significantly the daily life of the 10 million people living in the Parisian area. A deep study of the impacts of such a catastrophic natural hazard has recently been initiated by the French authorities. […] The rise of the water level in the Seine during the last two major floods occurred slowly over several weeks which may explain their low number of fatalities: 50 deaths in 1658 and only one death in 1910. The damage and destruction to buildings and infrastructure, and the resulting effect on economic activity were, however, of major proportions […] Dams have been constructed on the rivers upstream from Paris, but their capacity to stock water is only 830 million cubic meters, which would be insufficient when compared to the volume of 4 billion cubic meters of water produced by a big flood. […] The drinkable water supply system in Paris, as well as that of the sewer network, is still constrained by the decisions and orientations taken during the second half of the 19th century during the large public works projects realized under Napoleon III. […] two of the three water plants which treat river water and supply half of Paris with drinkable water existed in 1910. Water treatment technology has radically changed, but the production sites have remained the same. New reservoirs for potable water have been added, but the principles of distribution have not changed […] The average drinking water production in Paris is 615,000 m3/day.”

They note in the chapter from which the above quotes are taken that a flood comparable to the one that took place in 1910 would in 2005 have resulted in 20% of the surface of Paris being flooded and 600,000 people being without electricity, among other things. The water distribution system currently in place would also be unable to deal with the load; however, a plan for how to deal with this problem in an emergency setting does exist. In that context it’s perhaps worth noting that Paris is hardly unique in terms of the structure of the distribution system – elsewhere in the book it is observed that: “The water infrastructure developed in Europe during the 19th century and still applied, is almost completely based on options of centralized systems: huge supply and disposal networks with few, but large waterworks and sewage treatment plants.” Having both centralized and decentralized systems working at the same time/in the same area tends to increase costs, but may also lower risk; it’s observed in the book during the coverage of an Indonesian case-study that in that region the centralized service provider may take a long time to repair broken water pipes, which is … not very nice if you live in a tropical climate and prefer to have drinking water available to you.

“Water resources management challenges differ enormously in Romania, depending on the type of human settlement. The spectrum of settlement types stretches from the very low-density scattered single dwellings found in rural areas, through villages and small towns, to the much more dense and crowded cities. […] Water resources management will always face the challenge of balancing the needs of different water users. This is the case both in large urban or relatively small rural communities. The water needs of the agricultural production, energy and industrial sectors are often in competition. […] Romania’s water resources are relatively poor and unequally distributed in time and space […] There is a vast differential between urban and rural settlements when it comes to centralized drinking water systems; all the 263 municipalities and towns have such systems, while only 17% of rural communities benefit from this service. […] In Braila and Harghita counties, no village has a sewage network, and Giurgiu and Ialomita counties have only one apiece. Around 47 of the largest cities which do not have wastewater treatment plants (Bucharest, Braila, Craiova, Turnu Severin, Tulcea, etc.) produce ∼20 m3/s of wastewater, which is directly discharged untreated into surface water.”

“There is a difference in quality between water from centralized and decentralized supply systems [in the Ukraine (and likely elsewhere as well)]. Water quality in decentralized systems is the worst (some 30% of samples fail to meet standards, compared to 5.7% in the centralized supply). […] The Sanitary epidemiological stations draw random samples from 1,139 municipal, 6,899 departmental, and 8,179 rural pipes, and from 158,254 points of decentralized water supply, including 152,440 wells, 996 springs, and 4,818 artesian wells. […] From the first day following the accident at Chernobyl Nuclear Power Plant (ChNPP), one of the most serious problems was to prevent general contamination of the Dnieper water system and to guarantee safe water consumption for people living in the affected zone. The water protection and development of monitoring programs for the affected water bodies were among the most important post-accident countermeasures taken by the Government Bodies in Ukraine. […] To solve the water quality problem for Kiev, an emergency water intake at the Desna River was constructed within a very short period. […] During 1986 and the early months of 1987, over 130 special filtration dams […] with sorbing screens containing zeolite (klinoptilolite) were installed for detaining radionuclides while letting the water through. […] After the spring flood of 1987, the construction of new dams was terminated and the decision was made to destroy most of the existing dams. It was found that the 90Sr concentration reduction by the dams studied was insignificant […] Although some countermeasures and cleanup activities applied to radionuclides sources on catchments proved to have positive effects, many other actions were evaluated as ineffective and even useless. […] The most effective measures to reduce radioactivity in drinking water are those, which operate at the water treatment and distribution stage.”

“Diversification and redundancy are important technical features to make infrastructure systems less vulnerable to natural and social (man-made) hazards. […] risk management does not only encompass strategies to avoid the occurrence of certain events which might lead to damages or catastrophes, but also strategies of adaptation to limit damages.

The loss of potable water supply typically leads to waterborne diseases, such as typhus [presumably typhoid is meant here – US] and cholera.”

“Water velocity in a water supply system is about 1 m/s. Therefore, time is a primordial factor in contamination spread along the system. In order to minimize the damage caused by contamination of water, it is essential to act with maximum speed to achieve minimum spread of the contaminant.”
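The quoted ~1 m/s figure makes the response-time arithmetic easy to do; a small sketch (the response times are hypothetical):

```python
# How far can a contaminant travel along the pipes before the operator
# reacts, at the quoted ~1 m/s flow velocity? Response times are made up.

velocity_m_per_s = 1.0  # quoted typical velocity in a supply network

for response_minutes in (10, 30, 60):
    spread_km = velocity_m_per_s * response_minutes * 60 / 1000
    print(f"{response_minutes} min response -> ~{spread_km:.1f} km of pipe reached")
```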

October 21, 2016 Posted by | books, Geography | Leave a comment