Econstudentlog

Astrophysics

Here’s what I wrote about the book on goodreads:

“I think the author was trying to do too much with this book. He covers a very large number of topics, but unfortunately the book is not easy to read because he covers in a few pages topics which other authors write entire books about. If he’d covered fewer topics in greater detail I think the end result would have been better. Despite having watched a large number of lectures on related topics and read academic texts about some of the topics covered in the book, I found the book far from easy to read, certainly compared to other physics books in this series (the books about nuclear physics and particle physics are both significantly easier to read, in my opinion). The author sometimes seemed to me to have difficulties understanding how large the potential knowledge gap between him and the reader of the book might be.

Worth reading if you know some stuff already and you’re willing to put in a bit of work, but don’t expect too much from the coverage.”

I gave the book two stars on goodreads.

I decided early on while reading the book that the only way I was going to cover it at all here would be by posting a link-heavy post. I have added some quotes as well, but most of what’s going on in this book I’ll only cover by adding some relevant links to wiki articles dealing with the topics in question – as the link collection below should illustrate, although the subtitle of the book is ‘A Very Short Introduction’, it actually covers a great deal of ground (…too much ground, that’s part of the problem, as indicated above…). There are a lot of links because it’s just that kind of book.

First, a few quotes from the book:

“In thinking about the structure of an accretion disc it is helpful to imagine that it comprises a large number of solid rings, each of which spins as if each of its particles were in orbit around the central mass […] The speed of a circular orbit of radius r around a compact mass such as the Sun or a black hole is proportional to 1/√r, so the speed increases inwards. It follows that there is shear within an accretion disc: each rotating ring slides past the ring just outside it, and, in the presence of any friction or viscosity within the fluid, each ring twists or torques the ring just outside it in the direction of rotation, trying to get it to rotate faster.

Torque is to angular momentum what force is to linear momentum: the quantity that sets its rate of change. Just as Newton’s laws yield that force is equal to rate of change of momentum, the rate of change of a body’s angular momentum is equal to the torque on the body. Hence the existence of the torque from smaller rings to bigger rings implies an outward transport of angular momentum through the accretion disc. When the disc is in a steady state this outward transport of angular momentum by viscosity is balanced by an inward transport of angular momentum by gas as it spirals inwards through the disc, carrying its angular momentum with it.”
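To make the scaling in the quote above a bit more concrete, here is a small Python sketch of my own (not from the book) which computes the Keplerian circular-orbit speed v = √(GM/r) at a few radii around a solar-mass object; the speed falls off outwards, so adjacent rings slide past each other, which is the shear the author is talking about:

```python
import numpy as np

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
AU = 1.496e11          # astronomical unit, m

def v_circ(r, M=M_SUN):
    """Keplerian circular-orbit speed around a point mass M: v = sqrt(G*M/r)."""
    return np.sqrt(G * M / r)

radii = np.array([0.1, 0.5, 1.0, 5.0]) * AU
speeds = v_circ(radii)
omega = speeds / radii   # angular velocity of each 'ring'; scales as r**-1.5

# Inner rings orbit faster (v ~ r**-0.5), so with any viscosity in the fluid each
# ring torques the ring outside it forward: angular momentum moves outwards while
# mass spirals inwards, as described in the quote.
for r, v in zip(radii / AU, speeds / 1e3):
    print(f"r = {r:4.1f} AU   v_circ = {v:6.1f} km/s")
```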

“The differential equations that govern the motion of the planets are easily written down, and astronomical observations furnish the initial conditions to great precision. But with this precision we can predict the configuration of the planets only up to ∼ 40 Myr into the future — if the initial conditions are varied within the observational uncertainties, the predictions for 50 or 60 Myr later differ quite significantly. If you want to obtain predictions for 60 Myr that are comparable in precision to those we have for 40 Myr in the future, you require initial conditions that are 100 times more precise: for example, you require the current positions of the planets to within an error of 15 m. If you want comparable predictions 60.15 Myr in the future, you have to know the current positions to within 15 mm.”
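A generic way to think about those numbers (my own gloss, not the book’s notation): in a chaotic system small errors in the initial conditions grow roughly exponentially with time, so every fixed multiplicative improvement in initial precision only buys a fixed additive extension of the prediction horizon:

$$\delta(t) \approx \delta(0)\,e^{t/\tau} \qquad\Longrightarrow\qquad t_{\text{horizon}} \approx \tau\,\ln\!\left(\frac{\Delta_{\max}}{\delta(0)}\right),$$

where τ is the Lyapunov time of the system, δ(0) is the initial error, and Δ_max is the largest error in the predicted configuration one is willing to tolerate.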

“An important feature of the solutions to the differential equations of the solar system is that after some variable, say the eccentricity of Mercury’s orbit, has fluctuated in a narrow range for millions of years, it will suddenly shift to a completely different range. This behaviour reflects the importance of resonances for the dynamics of the system: at some moment a resonant condition becomes satisfied and the flow of energy within the system changes because a small disturbance can accumulate over thousands or millions of cycles into a large effect. If we start the integrations from a configuration that differs ever so little from the previous configuration, the resonant condition will fail to be satisfied, or be satisfied much earlier or later, and the solutions will look quite different.”

“In Chapter 4 we saw that the physics of accretion discs around stars and black holes is all about the outward transport of angular momentum, and that moving angular momentum outwards heats a disc. Outward transport of angular momentum is similarly important for galactic discs. […] in a gaseous accretion disc angular momentum is primarily transported by the magnetic field. In a stellar disc, this job has to be done by the gravitational field because stars only interact gravitationally. Spiral structure provides the gravitational field needed to transport angular momentum outwards.

In addition to carrying angular momentum out through the stellar disc, spiral arms regularly shock interstellar gas, causing it to become denser, and a fraction of it to collapse into new stars. For this reason, spiral structure is most easily traced in the distribution of young stars, especially massive, luminous stars, because all massive stars are young. […] Spiral arms are waves of enhanced star density that propagate through a stellar disc rather as sound waves propagate through air. Like sound waves they carry energy, and this energy is eventually converted from the ordered form it takes in the wave to the kinetic energy of randomly moving stars. That is, spiral arms heat the stellar disc.”

“[I]f you take any reasonably representative group of galaxies, from the group’s luminosity, you can deduce the quantity of ordinary matter it should contain. This quantity proves to be roughly ten times the amount of ordinary matter that’s in the galaxies. So most ordinary matter must lie between the galaxies rather than within them.”

“The nature of a galaxy is largely determined by three numbers: its luminosity, its bulge-to-disc ratio, and the ratio of its mass of cold gas to the mass in stars. Since stars form from cold gas, this last ratio determines how youthful the galaxy’s stellar population is.

A youthful stellar population contains massive stars, which are short-lived, luminous, and blue […] An old stellar population contains only low-mass, faint, and red stars. Moreover, the spatial distribution of young stars can be very lumpy because the stars have not had time to be spread around the system […] a galaxy with a young stellar population looks very different from one with an old population: it is more lumpy/streaky, bluer, and has a higher luminosity than a galaxy of similar stellar mass with an old stellar population.”

Links:

Accretion disk.
Supermassive black hole.
Quasar.
Magnetorotational instability.
Astrophysical jet.
Herbig–Haro object.
SS 433.
Cygnus A.
Collimated light.
Light curve.
Lyman-alpha line.
Balmer series.
Star formation.
Stellar evolution.
Black-body radiation.
Helium flash.
White dwarf (featured article).
Planetary nebula.
Photosphere.
Corona.
Solar transition region.
Photodissociation.
Carbon detonation.
X-ray binary.
Inverse Compton scattering.
Microquasar.
Quasi-periodic oscillation.
Urbain Le Verrier.
Perturbation theory.
Elliptic orbit.
Precession.
Axial precession.
Libration.
Orbital resonance.
Jupiter trojan (featured article).
Late Heavy Bombardment.
Exoplanet.
Lorentz factor.
Radio galaxy.
Gamma-ray burst (featured article).
Cosmic ray.
Hulse–Taylor binary.
Special relativity.
Lorentz covariance.
Lorentz transformation.
Muon.
Relativistic Doppler effect.
Superluminal motion.
Fermi acceleration.
Shock waves in astrophysics.
Ram pressure.
Synchrotron radiation.
General relativity (featured article).
Gravitational redshift.
Gravitational lens.
Fermat’s principle.
SBS 0957+561.
Strong gravitational lensing/Weak gravitational lensing.
Gravitational microlensing.
Shapiro delay.
Gravitational wave.
Dark matter.
Dwarf spheroidal galaxy.
Luminosity function.
Lenticular galaxy.
Spiral galaxy.
Disc galaxy.
Elliptical galaxy.
Stellar dynamics.
Constant of motion.
Bulge (astronomy).
Interacting galaxy.
Coma cluster.
Galaxy cluster.
Anemic galaxy.
Decoupling (cosmology).

June 20, 2017 Posted by | Astronomy, Books, Physics

Quotes

(The Pestalozzi quotes below are from The Education of Man, a short and poor aphorism collection I cannot possibly recommend despite the inclusion of quotes from it in this post.)

i. “Only a good conscience always gives man the courage to handle his affairs straightforwardly, openly and without evasion.” (Johann Heinrich Pestalozzi)

ii. “An intimate relationship in its full power is always a source of human wisdom and strength in relationships less intimate.” (-ll-)

iii. “Whoever is unwilling to help himself can be helped by no one.” (-ll-)

iv. “He who has filled his pockets in the service of injustice will have little good to say on behalf of justice.” (-ll-)

v. “It is Man’s fate that no one knows the truth alone; we all possess it, but it is divided up among us. He who learns from one man only, will never learn what the others know.” (-ll-)

vi. “No scoundrel is so wicked that he cannot at some point truthfully reprove some honest man” (-ll-)

vii. “The man too keenly aware of his good reputation is likely to have a bad one.” (-ll-)

viii. “Many words make an excuse anything but convincing.” (-ll-)

ix. “Fashions are usually seen in their true perspective only when they have gone out of fashion.” (-ll-)

x. “A thing that nobody looks for is seldom found.” (-ll-)

xi. “Many discoveries must have been stillborn or smothered at birth. We know only those which survived.” (William Ian Beardmore Beveridge)

xii. “Time is the most valuable thing a man can spend.” (Theophrastus)

xiii. “The only man who makes no mistakes is the man who never does anything.” (Theodore Roosevelt)

xiv. “It is hard to fail, but it is worse never to have tried to succeed.” (-ll-)

xv. “From their appearance in the Triassic until the end of the Cretaceous, a span of 140 million years, mammals remained small and inconspicuous while all the ecological roles of large terrestrial herbivores and carnivores were monopolized by dinosaurs; mammals did not begin to radiate and produce large species until after the dinosaurs had already become extinct at the end of the Cretaceous. One is forced to conclude that dinosaurs were competitively superior to mammals as large land vertebrates.” (Robert T. Bakker)

xvi. “Plants and plant-eaters co-evolved. And plants aren’t the passive partners in the chain of terrestrial life. […] A birch tree doesn’t feel cosmic fulfillment when a moose munches its leaves; the tree species, in fact, evolves to fight the moose, to keep the animal’s munching lips away from vulnerable young leaves and twigs. In the final analysis, the merciless hand of natural selection will favor the birch genes that make the tree less and less palatable to the moose in generation after generation. No plant species could survive for long by offering itself as unprotected fodder.” (-ll-)

xvii. “… if you look at crocodiles today, they aren’t really representative of what the lineage of crocodiles look like. Crocodiles are represented by about 23 species, plus or minus a couple. Along that lineage the more primitive members weren’t aquatic. A lot of them were bipedal, a lot of them looked like little dinosaurs. Some were armored, others had no teeth. They were all fully terrestrial. So this is just the last vestige of that radiation that we’re seeing. And the ancestor of both dinosaurs and crocodiles would have, to the untrained eye, looked much more like a dinosaur.” (Mark Norell)

xviii. “If we are to understand the interactions of a large number of agents, we must first be able to describe the capabilities of individual agents.” (John Henry Holland)

xix. “Evolution continually innovates, but at each level it conserves the elements that are recombined to yield the innovations.” (-ll-)

xx. “Model building is the art of selecting those aspects of a process that are relevant to the question being asked. […] High science depends on this art.” (-ll-)

June 19, 2017 Posted by | Biology, Books, Botany, Evolutionary biology, Paleontology, Quotes/aphorisms

Computer Science

I have enjoyed the physics books I’ve recently read in the ‘…A very short introduction’-series by Oxford University Press, so I figured it might make sense to investigate whether the series also has some decent coverage of other areas of research. I must however admit that I didn’t think too much of Dasgupta’s book. I think the author was given a very tough task. Having an author write a decent short book on a reasonably well-defined sub-topic of physics makes sense, whereas having him write the same sort of short and decent book about the entire field of ‘physics’ is a different matter. In some sense, something analogous to this is what Dasgupta had been asked to do (or had undertaken to do?). Of course computer science is a relatively new field, so arguably the analogy doesn’t completely hold; even if you cover every major topic in computer science there might still be significantly less ground to cover here than there would be had he been required to cover everything from Newton (…Copernicus? Eudoxus of Cnidus? Gan De?) to modern developments in M-theory. But the main point stands: the field is much too large for a book like this to do more than barely scratch the surface of a few relevant subfields, making the author’s entire endeavour exceedingly difficult to pull off successfully. I noted while reading the book that document searches for ‘graph theory’ and ‘discrete mathematics’ yielded zero results, and I assume that many major topics/areas of relevance are simply not mentioned at all, which, to be fair, is only to be expected considering the format of the book. The book could have been a lot worse, but it wasn’t all that great – I ended up giving it two stars on goodreads.

My coverage of the book here on the blog will be relatively lazy: I’ll only include links in this post, not quotes from the book. I looked up a lot of links to coverage of relevant concepts and topics covered in the book while reading it, and I have added many of these links below. The links should give you some idea of what sort of topics are covered in the publication.

Church–Turing thesis.
Turing machine.
Automata theory.
Algorithm.
Donald Knuth.
Procedural knowledge.
Machine code.
Infix notation.
Polish notation.
Time complexity.
Linear search.
Big O notation.
Computational complexity theory.
P versus NP problem.
NP-completeness.
Programming language.
Assembly language.
Hardware description language.
Data type (computer science).
Statement (computer science).
Instruction cycle.
Assignment (computer science).
Computer architecture.
Control unit.
Computer memory.
Memory buffer register.
Cache (computing).
Parallel computing (featured article).
Instruction pipelining.
Amdahl’s law.
FCFS algorithm.
Exact algorithm.
Artificial intelligence.
Means-ends analysis.

June 8, 2017 Posted by | Books, Computer science

Nuclear physics

Below I have posted a few observations from the book, as well as a number of links to coverage of other topics mentioned/covered in the book. It’s a good book; the level of coverage is very decent considering the format of the publication.

“Electrons are held in place, remote from the nucleus, by the electrical attraction of opposite charges, electrons being negatively and the atomic nucleus positively charged. A temperature of a few thousand degrees is sufficient to break this attraction completely and liberate all of the electrons from within atoms. Even room temperature can be enough to release one or two; the ease with which electrons can be moved from one atom to another is the source of chemistry, biology, and life.”

“Quantum mechanics explains the behaviour of electrons in atoms, and of nucleons in nuclei. In an atom, electrons cannot go just where they please, but are restricted like someone on a ladder who can only step on individual rungs. When an electron drops from a rung with high energy to one that is lower down, the excess energy is carried away by a photon of light. The spectrum of these photons reveals the pattern of energy levels within the atom. Similar constraints apply to nucleons in nuclei. Nuclei in excited states, with one or more protons or neutrons on a high rung, also give up energy by emitting photons. The main difference between what happens to atomic electrons relative to atomic nuclei is the nature of the radiated light. In the former the light may be in the visible spectrum, whose photons have relatively low energy, whereas in the case of nuclei the light consists of X-rays and gamma rays, whose photons have energies that are millions of times greater. This is the origin of gamma radioactivity.”
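A quick back-of-the-envelope comparison of the photon energies involved (my own arithmetic, not the book’s, using E = hc/λ with hc ≈ 1240 eV·nm): a typical atomic transition of a couple of eV corresponds to visible light, whereas a nuclear transition of a couple of MeV corresponds to gamma rays,

$$\lambda_{\text{atomic}} \approx \frac{1240\ \text{eV·nm}}{2\ \text{eV}} \approx 620\ \text{nm}, \qquad \lambda_{\text{nuclear}} \approx \frac{1240\ \text{eV·nm}}{2\times 10^{6}\ \text{eV}} \approx 6\times 10^{-4}\ \text{nm},$$

i.e. the nuclear photon carries roughly a million times more energy, which is why nuclear de-excitation shows up as X-rays and gamma rays rather than visible light.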

“[A]ll particles that feel the strong interaction are made of quarks. […] Quarks that form nuclear particles come in two flavours, known as up (u) or down (d), with electrical charges that are fractions, +2/3 or −1/3 respectively, of a proton’s charge. Thus uud forms a proton and ddu a neutron. In addition to electrical charge, quarks possess another form of charge, known as colour. This is the fundamental source of the strong nuclear force. Whereas electric charge occurs in some positive or negative numerical amount, for colour charge there are three distinct varieties of each. These are referred to as red, green, or blue, by analogy with colours, but are just names and have no deeper significance. […] colour charge and electric charge obey very similar rules. For example, analogous to the behaviour of electric charge, colour charges of the same colour repel, whereas different colours can attract […]. A proton or neutron is thus formed when three quarks, each with a different colour, mutually attract one another. In this configuration the colour forces have neutralized, analogous to the way that positive and negative charges neutralize within an atom.”
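Making the charge bookkeeping in that quote explicit (my arithmetic, in units of the proton charge):

$$q_{uud} = \tfrac{2}{3} + \tfrac{2}{3} - \tfrac{1}{3} = +1, \qquad q_{ddu} = -\tfrac{1}{3} - \tfrac{1}{3} + \tfrac{2}{3} = 0,$$

which is how two flavours of fractionally charged quarks add up to a proton with charge +1 and a neutron with charge 0.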

“The relativistic quantum theory of colour is known as quantum chromodynamics (QCD). It is similar in spirit to quantum electrodynamics (QED). QED implies that the electromagnetic force is transmitted by the exchange of massless photons; by analogy, in QCD the force between quarks, within nucleons, is due to the exchange of massless gluons.”

“In a nutshell, the quarks in heavy nuclei are found to have, on average, slightly lower momenta than in isolated protons or neutrons. In spatial terms, this equates with the interpretation that individual quarks are, on average, less confined than in free nucleons. […] The overall conclusion is that the quarks are more liberated in nuclei when in a region of relatively high density. […] This interpretation of the microstructure of atomic nuclei suggests that nuclei are more than simply individual nucleons bound by the strong force. There is a tendency, under extreme pressure or density, for them to merge, their constituent quarks freed to flow more liberally. […] This freeing of quarks is a liberation of colour charges, and in theory should happen for gluons also. Thus, it is a precursor of what is hypothesized to occur within atomic nuclei under conditions of extreme temperature and pressure […] atoms are unable to survive at high temperatures and pressure, as in the sun for example, and their constituent electric charges—electrons and protons—flow independently as electrically charged gases. This is a state of matter known as plasma. Analogously, under even more extreme conditions, the coloured quarks are unable to configure into individual neutrons and protons. Instead, the quarks and gluons are theorized to flow freely as a quark–gluon plasma (QGP).”

“The mass of a nucleus is not simply the sum of the masses of its constituent nucleons. […] some energy is taken up to bind the nucleus together. This ‘binding energy’ is the difference between the mass of the nucleus and its constituents. […] The larger the binding energy, the greater is the propensity for the nucleus to be stable. Its actual stability is often determined by the relative size of the binding energy of the nucleus to that of its near neighbours in the periodic table of elements, or of other isotopes of the original elemental nucleus. As nature seeks stability by minimizing energy, a nucleus will seek to lower the total mass, or equivalently, to increase the binding energy. […] An effective guide to stability, and the pattern of radioactive decays, is given by the semi-empirical mass formula (SEMF).”
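The quote mentions the semi-empirical mass formula without writing it out, so here is a rough Python sketch of my own; the coefficients below are one commonly quoted set of fitted values (different fits give slightly different numbers, and the formula is crude for very light nuclei), so treat the output as illustrative rather than authoritative:

```python
import math

# One commonly quoted set of SEMF coefficients, in MeV (values vary between fits).
A_V, A_S, A_C, A_A, A_P = 15.8, 18.3, 0.714, 23.2, 12.0

def binding_energy(A, Z):
    """Semi-empirical mass formula estimate of the binding energy (in MeV) of a
    nucleus with mass number A and proton number Z."""
    N = A - Z
    volume    = A_V * A                              # each nucleon bound by its neighbours
    surface   = -A_S * A ** (2 / 3)                  # surface nucleons are less tightly bound
    coulomb   = -A_C * Z * (Z - 1) / A ** (1 / 3)    # electrostatic repulsion between protons
    asymmetry = -A_A * (A - 2 * Z) ** 2 / A          # penalty for a proton/neutron imbalance
    if A % 2 == 1:
        pairing = 0.0
    elif Z % 2 == 0 and N % 2 == 0:
        pairing = A_P / math.sqrt(A)                 # even-even nuclei are extra stable
    else:
        pairing = -A_P / math.sqrt(A)                # odd-odd nuclei are less stable
    return volume + surface + coulomb + asymmetry + pairing

for name, A, Z in [("O-16", 16, 8), ("Fe-56", 56, 26), ("Pb-208", 208, 82), ("U-238", 238, 92)]:
    B = binding_energy(A, Z)
    print(f"{name:6s} B = {B:7.1f} MeV   B/A = {B / A:5.2f} MeV per nucleon")
```

Binding energy per nucleon peaks around iron and declines again for the heaviest nuclei, which is the pattern the next quote describes.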

“For light nuclei the binding energy grows with A [the mass number of the nucleus, i.e. the number of nucleons – US] until electrostatic repulsion takes over in large nuclei. […] At large values of Z [# of protons – US], the penalty of electrostatic charge, which extends throughout the nucleus, requires further neutrons to add to the short range attraction in compensation. Eventually, for Z > 82, the amount of electrostatic repulsion is so large that nuclei cannot remain stable, even when they have large numbers of neutrons. […] All nuclei heavier than lead are radioactive.”

“Three minutes after the big bang, the material universe consisted primarily of the following: 75% protons; 24% helium nuclei; a small number of deuterons; traces of lithium, beryllium, and boron, and free electrons. […] 300,000 years later, the ambient temperature had fallen below 10,000 degrees, that is similar to or cooler than the outer regions of our sun today. At these energies the negatively charged electrons were at last able to be held fast by electrical attraction to the positively charged atomic nuclei whereby they combined to form neutral atoms. Electromagnetic radiation was set free and the universe became transparent as light could roam unhindered across space.
The big bang did not create the elements necessary for life, such as carbon, however. Carbon is the next lightest element after boron, but its synthesis presented an insuperable barrier in the very early universe. The huge stability of alpha particles frustrates attempts to make carbon by collisions between any pair of lighter isotopes. […] Thus no carbon or heavier isotopes were formed during big bang nucleosynthesis. Their synthesis would require the emergence of stars.”

“In the heat of the big bang, quarks and gluons swarmed independently in quark–gluon plasma. Inside the sun, relatively cool, they form protons but the temperature is nonetheless too high for atoms to survive. Thus inside the sun, electrons and protons swarm independently as electrical plasma. It is primarily protons that fuel the sun today. […] Protons can bump into one another and initiate a set of nuclear processes that eventually converts four of them into helium-4 […] As the energy mc² locked into a single helium-4 nucleus is less than that in the original four protons, the excess is released into the surroundings, some of it eventually providing warmth here on earth. […] because the sun produces these reactions continuously over aeons, unlike big bang nucleosynthesis, which lasted mere minutes, unstable isotopes, such as tritium, play no role in solar nucleosynthesis.”
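To put a number on that ‘excess is released’ statement, here is my own quick calculation with standard particle-mass values (not a quote from the book): four protons are heavier than one helium-4 nucleus, and the mass difference times c² is what solar fusion ultimately liberates.

```python
M_PROTON = 938.272     # proton rest mass, MeV/c^2
M_HE4 = 3727.379       # helium-4 nucleus rest mass, MeV/c^2

mass_in = 4 * M_PROTON
released = mass_in - M_HE4        # ~25.7 MeV of rest mass turned into energy
fraction = released / mass_in     # ~0.7 % of the initial rest mass

print(f"4 protons:        {mass_in:8.1f} MeV/c^2")
print(f"helium-4 nucleus: {M_HE4:8.1f} MeV/c^2")
print(f"released:         {released:8.2f} MeV  ({fraction:.2%} of the initial mass)")
# Counting the annihilation of the two positrons produced along the way, the full
# pp chain yields roughly 26.7 MeV per helium-4; a small share escapes as neutrinos.
```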

“Although individual antiparticles are regularly produced from the energy in collisions between cosmic rays, or in accelerator laboratories such as at CERN, there is no evidence for antimatter in bulk in the universe at large. […] To date, all the evidence is that the universe at large is made of matter to the exclusion of antimatter. […] One of the great mysteries in physics is how the symmetry between matter and antimatter was disturbed.”

Some links:

Nuclear physics.
Alpha decay/beta decay/gamma radiation.
Positron emission.
Isotope.
Rutherford model.
Bohr model.
Spin.
Nucleon.
Nuclear fission.
X-ray crystallography.
Pion.
EMC effect.
Magic number.
Cosmic ray spallation.
Asymptotic giant branch.
CNO cycle.
Transuranium elements.
Actinide.
Island of stability.
Transfermium Wars.
Nuclear drip line.
Halo nucleus.
Hyperon/hypernucleus.
Lambda baryon.
Strangelet.
Quark star.
Antineutron.
Radiation therapy.
Rutherford backscattering spectrometry.
Particle-induced X-ray emission.

June 5, 2017 Posted by | Books, Physics

Words

Almost all the words included in this post are words I encountered while reading the Flashman novels Flashman and the Mountain of Light, Flash for Freedom!, and Flashman and the Redskins. Almost all of them are words I have not included in similar posts in the past, but I decided to include a few (2 or 3, something like that) which I had already included in earlier posts, because I like those words and because the fact that I took notice of them while reading these novels indicates to me that they haven’t yet stuck in my mind the way I’d like them to; I usually only mark out words with which I’m either unfamiliar or whose meaning I have trouble remembering.

The post includes 6 segments of 20 words/concepts each.

Duff. Coparcener. Chunter. Haver. Sop. Purdah. Bedewing. Paynim. Conniptions. Pap. Tiffin. Aigrette. Whippet. Grandee. Caparison. Howdah. Mahout. Malediction. Tipple. Slantendicular.

Collogue. Hocussing. Sobersided/sobersides. Grog. Ramage. Hutment. Peradventure. Truckle. Caracole. Hustings. Gamester. Barracoon. Bowsprit. Gorget. Midge. Mumchance. Kurbash. Mudge. Unchancy. Mizzenmast/mizzen.

Wiseacre. Cully. Sibilant. Hummock. Gloaming. Clew. Bestride. Dragoman. Lanyard. Binnacle. Stevedore. Corn pone/pone. Bawd. Spavin. Plaintiff. Wickiup. Julep. Holystone. Crimp. Melodeon.

Bitumen. Reticule. Roustabout. Teamster (interestingly, what this word means seems to have changed over time; in the Flashman setting the word is used to describe someone handling teams of slaves, i.e. a slave driver). Serape. Crupper. Stockman. Carter. Clodpole. Tenderfoot. Chevron. Doss. Coonskin. Roué. Bight. Ferrule. Bodkin. Pelf. Pother. Ford.

Concourse. Dixie. Tobyman. Kedgeree. Prepossess. Rivet. Clubbable. Bower. Pottle. Clog. Waft. Lariat. Bargee. Gallus. Navvy. Papoose. Levee. Minatory. Wend. Statuary.

Fustian. Blatherskite. Escritoire. Twanging. Tippet. Wanton. Convivial. Blandishment. Quirt. Coulee. Guidon. Sorrel. Arrant. Contumelious. Depilation. Magnate. Vatic. Grimalkin. Manciple. Banns.

May 2, 2017 Posted by | Books, language

Biodemography of aging (IV)

My working assumption as I was reading part two of the book was that I would not be covering that part of the book in much detail here because it would simply be too much work to make such posts legible to the readership of this blog. However, I then later, while writing this post, had the thought that given that almost nobody reads along here anyway (I’m not complaining, mind you – this is how I like it these days), the main beneficiary of my blog posts will always be myself, which led to the related observation/notion that I should not be limiting my coverage of interesting stuff here simply because some hypothetical and probably nonexistent readership out there might not be able to follow the coverage. So when I started out writing this post I was working under the assumption that it would be my last post about the book, but I now feel sure that if I find the time I’ll add at least one more post about the book’s statistics coverage. On a related note, I am explicitly making the observation here that this post was written for my benefit, not yours. You can read it if you like, or not, but it was not really written for you.

I have added bold in a few places to emphasize key concepts and observations from the quoted paragraphs and to make the post easier for me to navigate later (all the italics below are, on the other hand, those of the authors of the book).

Biodemography is a multidisciplinary branch of science that unites under its umbrella various analytic approaches aimed at integrating biological knowledge and methods and traditional demographic analyses to shed more light on variability in mortality and health across populations and between individuals. Biodemography of aging is a special subfield of biodemography that focuses on understanding the impact of processes related to aging on health and longevity.”

“Mortality rates as a function of age are a cornerstone of many demographic analyses. The longitudinal age trajectories of biomarkers add a new dimension to the traditional demographic analyses: the mortality rate becomes a function of not only age but also of these biomarkers (with additional dependence on a set of sociodemographic variables). Such analyses should incorporate dynamic characteristics of trajectories of biomarkers to evaluate their impact on mortality or other outcomes of interest. Traditional analyses using baseline values of biomarkers (e.g., Cox proportional hazards or logistic regression models) do not take into account these dynamics. One approach to the evaluation of the impact of biomarkers on mortality rates is to use the Cox proportional hazards model with time-dependent covariates; this approach is used extensively in various applications and is available in all popular statistical packages. In such a model, the biomarker is considered a time-dependent covariate of the hazard rate and the corresponding regression parameter is estimated along with standard errors to make statistical inference on the direction and the significance of the effect of the biomarker on the outcome of interest (e.g., mortality). However, the choice of the analytic approach should not be governed exclusively by its simplicity or convenience of application. It is essential to consider whether the method gives meaningful and interpretable results relevant to the research agenda. In the particular case of biodemographic analyses, the Cox proportional hazards model with time-dependent covariates is not the best choice.
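For concreteness, here is roughly what the ‘standard’ approach described (and criticized) above looks like in practice – a small simulated example of my own using the Python lifelines package; the data-generating process, column names and numbers are all invented for illustration and have nothing to do with the book’s data:

```python
import numpy as np
import pandas as pd
from lifelines import CoxTimeVaryingFitter

rng = np.random.default_rng(42)
rows = []
for i in range(300):
    x = rng.normal(0.0, 1.0)                  # biomarker value at the current examination
    t, dead = 0.0, False
    while t < 8.0 and not dead:               # examinations every time unit, censoring at t = 8
        hazard = 0.05 * np.exp(0.6 * x)       # true hazard given the current biomarker value
        wait = rng.exponential(1.0 / hazard)  # time to death at this hazard
        stop = min(t + 1.0, t + wait, 8.0)
        dead = (t + wait) <= min(t + 1.0, 8.0)
        rows.append({"id": i, "start": t, "stop": stop, "x": x, "event": int(dead)})
        t = stop
        x += rng.normal(0.2, 0.3)             # biomarker drifts between examinations

df = pd.DataFrame(rows)                       # long format: one row per subject-interval

ctv = CoxTimeVaryingFitter()
ctv.fit(df, id_col="id", event_col="event", start_col="start", stop_col="stop")
ctv.print_summary()                           # the coefficient on x estimates the log hazard ratio
```

This is exactly the kind of analysis the authors argue is often inappropriate for biomarkers in aging studies, for the reasons spelled out in the next quote (informative dropout, measurement error, sparse examination times).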

“Longitudinal studies of aging present special methodological challenges due to inherent characteristics of the data that need to be addressed in order to avoid biased inference. The challenges are related to the fact that the populations under study (aging individuals) experience substantial dropout rates related to death or poor health and often have co-morbid conditions related to the disease of interest. The standard assumption made in longitudinal analyses (although usually not explicitly mentioned in publications) is that dropout (e.g., death) is not associated with the outcome of interest. While this can be safely assumed in many general longitudinal studies (where, e.g., the main causes of dropout might be the administrative end of the study or moving out of the study area, which are presumably not related to the studied outcomes), the very nature of the longitudinal outcomes (e.g., measurements of some physiological biomarkers) analyzed in a longitudinal study of aging assumes that they are (at least hypothetically) related to the process of aging. Because the process of aging leads to the development of diseases and, eventually, death, in longitudinal studies of aging an assumption of non-association of the reason for dropout and the outcome of interest is, at best, risky, and usually is wrong. As an illustration, we found that the average trajectories of different physiological indices of individuals dying at earlier ages markedly deviate from those of long-lived individuals, both in the entire Framingham original cohort […] and also among carriers of specific alleles […] In such a situation, panel compositional changes due to attrition affect the averaging procedure and modify the averages in the total sample. Furthermore, biomarkers are subject to measurement error and random biological variability. They are usually collected intermittently at examination times which may be sparse and typically biomarkers are not observed at event times. It is well known in the statistical literature that ignoring measurement errors and biological variation in such variables and using their observed “raw” values as time-dependent covariates in a Cox regression model may lead to biased estimates and incorrect inferences […] Standard methods of survival analysis such as the Cox proportional hazards model (Cox 1972) with time-dependent covariates should be avoided in analyses of biomarkers measured with errors because they can lead to biased estimates.

“Statistical methods aimed at analyses of time-to-event data jointly with longitudinal measurements have become known in the mainstream biostatistical literature as “joint models for longitudinal and time-to-event data” (“survival” or “failure time” are often used interchangeably with “time-to-event”) or simply “joint models.” This is an active and fruitful area of biostatistics with an explosive growth in recent years. […] The standard joint model consists of two parts, the first representing the dynamics of longitudinal data (which is referred to as the “longitudinal sub-model”) and the second one modeling survival or, generally, time-to-event data (which is referred to as the “survival sub-model”). […] Numerous extensions of this basic model have appeared in the joint modeling literature in recent decades, providing great flexibility in applications to a wide range of practical problems. […] The standard parameterization of the joint model (11.2) assumes that the risk of the event at age t depends on the current “true” value of the longitudinal biomarker at this age. While this is a reasonable assumption in general, it may be argued that additional dynamic characteristics of the longitudinal trajectory can also play a role in the risk of death or onset of a disease. For example, if two individuals at the same age have exactly the same level of some biomarker at this age, but the trajectory for the first individual increases faster with age than that of the second one, then the first individual can have worse survival chances for subsequent years. […] Therefore, extensions of the basic parameterization of joint models allowing for dependence of the risk of an event on such dynamic characteristics of the longitudinal trajectory can provide additional opportunities for comprehensive analyses of relationships between the risks and longitudinal trajectories. Several authors have considered such extended models. […] joint models are computationally intensive and are sometimes prone to convergence problems [however such] models provide more efficient estimates of the effect of a covariate […] on the time-to-event outcome in the case in which there is […] an effect of the covariate on the longitudinal trajectory of a biomarker. This means that analyses of longitudinal and time-to-event data in joint models may require smaller sample sizes to achieve comparable statistical power with analyses based on time-to-event data alone (Chen et al. 2011).”
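Written out schematically (this is the standard formulation as it commonly appears in the joint-models literature, in my own notation – I am not claiming it reproduces the book’s equation (11.2) verbatim): the longitudinal sub-model describes the biomarker trajectory with a mixed-effects model, and the survival sub-model lets the hazard depend on the current ‘true’ value of that trajectory,

$$y_i(t) = m_i(t) + \varepsilon_i(t), \qquad m_i(t) = \mathbf{x}_i(t)^{\top}\boldsymbol\beta + \mathbf{z}_i(t)^{\top}\mathbf{b}_i, \qquad \mathbf{b}_i \sim N(\mathbf{0},\mathbf{D}),\ \varepsilon_i(t) \sim N(0,\sigma^2),$$

$$h_i\big(t \mid M_i(t)\big) = h_0(t)\,\exp\!\big\{\boldsymbol\gamma^{\top}\mathbf{w}_i + \alpha\, m_i(t)\big\},$$

with $M_i(t)$ the history of the true trajectory up to $t$; the extensions mentioned in the quote add terms such as $\alpha_2\, m_i'(t)$ (the current slope) to the exponent, so that the risk can also depend on how fast the biomarker is changing.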

“To be useful as a tool for biodemographers and gerontologists who seek biological explanations for observed processes, models of longitudinal data should be based on realistic assumptions and reflect relevant knowledge accumulated in the field. An example is the shape of the risk functions. Epidemiological studies show that the conditional hazards of health and survival events considered as functions of risk factors often have U- or J-shapes […], so a model of aging-related changes should incorporate this information. In addition, risk variables, and, what is very important, their effects on the risks of corresponding health and survival events, experience aging-related changes and these can differ among individuals. […] An important class of models for joint analyses of longitudinal and time-to-event data incorporating a stochastic process for description of longitudinal measurements uses an epidemiologically-justified assumption of a quadratic hazard (i.e., U-shaped in general and J-shaped for variables that can take values only on one side of the U-curve) considered as a function of physiological variables. Quadratic hazard models have been developed and intensively applied in studies of human longitudinal data”.
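The quadratic-hazard idea can be written compactly (again a schematic rendering in my own notation, not a verbatim formula from the book): the mortality rate as a function of a vector of physiological variables $x$ is U-shaped around an age-dependent ‘optimal’ trajectory,

$$\mu(t, x) = \mu_0(t) + \big(x - f(t)\big)^{\top} Q(t)\,\big(x - f(t)\big),$$

where $\mu_0(t)$ is a baseline hazard, $f(t)$ is the age-specific vector of physiological values that minimizes risk, and the non-negative definite matrix $Q(t)$ controls how steeply risk rises as the variables deviate from $f(t)$.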

“Various approaches to statistical model building and data analysis that incorporate unobserved heterogeneity are ubiquitous in different scientific disciplines. Unobserved heterogeneity in models of health and survival outcomes can arise because there may be relevant risk factors affecting an outcome of interest that are either unknown or not measured in the data. Frailty models introduce the concept of unobserved heterogeneity in survival analysis for time-to-event data. […] Individual age trajectories of biomarkers can differ due to various observed as well as unobserved (and unknown) factors and such individual differences propagate to differences in risks of related time-to-event outcomes such as the onset of a disease or death. […] The joint analysis of longitudinal and time-to-event data is the realm of a special area of biostatistics named “joint models for longitudinal and time-to-event data” or simply “joint models” […] Approaches that incorporate heterogeneity in populations through random variables with continuous distributions (as in the standard joint models and their extensions […]) assume that the risks of events and longitudinal trajectories follow similar patterns for all individuals in a population (e.g., that biomarkers change linearly with age for all individuals). Although such homogeneity in patterns can be justifiable for some applications, generally this is a rather strict assumption […] A population under study may consist of subpopulations with distinct patterns of longitudinal trajectories of biomarkers that can also have different effects on the time-to-event outcome in each subpopulation. When such subpopulations can be defined on the base of observed covariate(s), one can perform stratified analyses applying different models for each subpopulation. However, observed covariates may not capture the entire heterogeneity in the population in which case it may be useful to conceive of the population as consisting of latent subpopulations defined by unobserved characteristics. Special methodological approaches are necessary to accommodate such hidden heterogeneity. Within the joint modeling framework, a special class of models, joint latent class models, was developed to account for such heterogeneity […] The joint latent class model has three components. First, it is assumed that a population consists of a fixed number of (latent) subpopulations. The latent class indicator represents the latent class membership and the probability of belonging to the latent class is specified by a multinomial logistic regression function of observed covariates. It is assumed that individuals from different latent classes have different patterns of longitudinal trajectories of biomarkers and different risks of event. The key assumption of the model is conditional independence of the biomarker and the time-to-events given the latent classes. Then the class-specific models for the longitudinal and time-to-event outcomes constitute the second and third component of the model thus completing its specification. […] the latent class stochastic process model […] provides a useful tool for dealing with unobserved heterogeneity in joint analyses of longitudinal and time-to-event outcomes and taking into account hidden components of aging in their joint influence on health and longevity. This approach is also helpful for sensitivity analyses in applications of the original stochastic process model. 
We recommend starting the analyses with the original stochastic process model and estimating the model ignoring possible hidden heterogeneity in the population. Then the latent class stochastic process model can be applied to test hypotheses about the presence of hidden heterogeneity in the data in order to appropriately adjust the conclusions if a latent structure is revealed.”
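Schematically, and again in my own notation rather than the book’s, the three components of the joint latent class model look something like this: a multinomial logistic model for membership of latent class $g$ out of $G$ classes, plus class-specific longitudinal and survival models, with the two outcomes assumed conditionally independent given the class,

$$P(c_i = g \mid \mathbf{w}_i) = \frac{\exp\!\big(\xi_{0g} + \mathbf{w}_i^{\top}\boldsymbol\xi_g\big)}{\sum_{l=1}^{G}\exp\!\big(\xi_{0l} + \mathbf{w}_i^{\top}\boldsymbol\xi_l\big)}, \qquad y_i(t)\mid c_i = g \ \sim\ \text{class-specific mixed model}, \qquad h_i(t \mid c_i = g) = h_{0g}(t)\,e^{\boldsymbol\gamma_g^{\top}\mathbf{w}_i}.$$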

The longitudinal genetic-demographic model (or the genetic-demographic model for longitudinal data) […] combines three sources of information in the likelihood function: (1) follow-up data on survival (or, generally, on some time-to-event) for genotyped individuals; (2) (cross-sectional) information on ages at biospecimen collection for genotyped individuals; and (3) follow-up data on survival for non-genotyped individuals. […] Such joint analyses of genotyped and non-genotyped individuals can result in substantial improvements in statistical power and accuracy of estimates compared to analyses of the genotyped subsample alone if the proportion of non-genotyped participants is large. Situations in which genetic information cannot be collected for all participants of longitudinal studies are not uncommon. They can arise for several reasons: (1) the longitudinal study may have started some time before genotyping was added to the study design so that some initially participating individuals dropped out of the study (i.e., died or were lost to follow-up) by the time of genetic data collection; (2) budget constraints prohibit obtaining genetic information for the entire sample; (3) some participants refuse to provide samples for genetic analyses. Nevertheless, even when genotyped individuals constitute a majority of the sample or the entire sample, application of such an approach is still beneficial […] The genetic stochastic process model […] adds a new dimension to genetic biodemographic analyses, combining information on longitudinal measurements of biomarkers available for participants of a longitudinal study with follow-up data and genetic information. Such joint analyses of different sources of information collected in both genotyped and non-genotyped individuals allow for more efficient use of the research potential of longitudinal data which otherwise remains underused when only genotyped individuals or only subsets of available information (e.g., only follow-up data on genotyped individuals) are involved in analyses. Similar to the longitudinal genetic-demographic model […], the benefits of combining data on genotyped and non-genotyped individuals in the genetic SPM come from the presence of common parameters describing characteristics of the model for genotyped and non-genotyped subsamples of the data. This takes into account the knowledge that the non-genotyped subsample is a mixture of carriers and non-carriers of the same alleles or genotypes represented in the genotyped subsample and applies the ideas of heterogeneity analyses […] When the non-genotyped subsample is substantially larger than the genotyped subsample, these joint analyses can lead to a noticeable increase in the power of statistical estimates of genetic parameters compared to estimates based only on information from the genotyped subsample. This approach is applicable not only to genetic data but to any discrete time-independent variable that is observed only for a subsample of individuals in a longitudinal study.
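The way the different data sources enter a single likelihood can be sketched as follows (my schematic, not the book’s notation): genotyped individuals contribute terms conditional on their observed genotype, while each non-genotyped individual contributes a mixture over the possible genotypes weighted by the (estimated) carrier frequencies, and it is the parameters shared across the two products that tie the subsamples together,

$$L(\theta) \;=\; \prod_{i\,\in\,\text{genotyped}} L_i\big(\theta \mid g_i\big)\;\prod_{j\,\in\,\text{non-genotyped}} \sum_{g} P(g)\, L_j\big(\theta \mid g\big).$$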

“Despite an existing tradition of interpreting differences in the shapes or parameters of the mortality rates (survival functions) resulting from the effects of exposure to different conditions or other interventions in terms of characteristics of individual aging, this practice has to be used with care. This is because such characteristics are difficult to interpret in terms of properties of external and internal processes affecting the chances of death. An important question then is: What kind of mortality model has to be developed to obtain parameters that are biologically interpretable? The purpose of this chapter is to describe an approach to mortality modeling that represents mortality rates in terms of parameters of physiological changes and declining health status accompanying the process of aging in humans. […] A traditional (demographic) description of changes in individual health/survival status is performed using a continuous-time random Markov process with a finite number of states, and age-dependent transition intensity functions (transitions rates). Transitions to the absorbing state are associated with death, and the corresponding transition intensity is a mortality rate. Although such a description characterizes connections between health and mortality, it does not allow for studying factors and mechanisms involved in the aging-related health decline. Numerous epidemiological studies provide compelling evidence that health transition rates are influenced by a number of factors. Some of them are fixed at the time of birth […]. Others experience stochastic changes over the life course […] The presence of such randomly changing influential factors violates the Markov assumption, and makes the description of aging-related changes in health status more complicated. […] The age dynamics of influential factors (e.g., physiological variables) in connection with mortality risks has been described using a stochastic process model of human mortality and aging […]. Recent extensions of this model have been used in analyses of longitudinal data on aging, health, and longevity, collected in the Framingham Heart Study […] This model and its extensions are described in terms of a Markov stochastic process satisfying a diffusion-type stochastic differential equation. The stochastic process is stopped at random times associated with individuals’ deaths. […] When an individual’s health status is taken into account, the coefficients of the stochastic differential equations become dependent on values of the jumping process. This dependence violates the Markov assumption and renders the conditional Gaussian property invalid. So the description of this (continuously changing) component of aging-related changes in the body also becomes more complicated. Since studying age trajectories of physiological states in connection with changes in health status and mortality would provide more realistic scenarios for analyses of available longitudinal data, it would be a good idea to find an appropriate mathematical description of the joint evolution of these interdependent processes in aging organisms. For this purpose, we propose a comprehensive model of human aging, health, and mortality in which the Markov assumption is fulfilled by a two-component stochastic process consisting of jumping and continuously changing processes. 
The jumping component is used to describe relatively fast changes in health status occurring at random times, and the continuous component describes relatively slow stochastic age-related changes of individual physiological states. […] The use of stochastic differential equations for random continuously changing covariates has been studied intensively in the analysis of longitudinal data […] Such a description is convenient since it captures the feedback mechanism typical of biological systems reflecting regular aging-related changes and takes into account the presence of random noise affecting individual trajectories. It also captures the dynamic connections between aging-related changes in health and physiological states, which are important in many applications.”

April 23, 2017 Posted by | Biology, Books, Demographics, Genetics, Mathematics, Statistics

Biodemography of aging (III)

Latent class representation of the Grade of Membership model.
Singular value decomposition.
Affine space.
Lebesgue measure.
General linear position.

The links above are links to topics I looked up while reading the second half of the book. The first link is quite relevant to the book’s coverage as a comprehensive longitudinal Grade of Membership (-GoM) model is covered in chapter 17. Relatedly, chapter 18 covers linear latent structure (-LLS) models, and as observed in the book LLS is a generalization of GoM. As should be obvious from the nature of the links some of the stuff included in the second half of the text is highly technical, and I’ll readily admit I was not fully able to understand all the details included in the coverage of chapters 17 and 18 in particular. On account of the technical nature of the coverage in Part 2 I’m not sure I’ll cover the second half of the book in much detail, though I probably shall devote at least one more post to some of those topics, as they were quite interesting even if some of the details were difficult to follow.

I have almost finished the book at this point, and I have already decided to both give the book five stars and include it on my list of favorite books on goodreads; it’s really well written, and it provides consistently highly detailed coverage of very high quality. As I also noted in the first post about the book the authors have given readability aspects some thought, and I am sure most readers would learn quite a bit from this text even if they were to skip some of the more technical chapters. The main body of Part 2 of the book, the subtitle of which is ‘Statistical Modeling of Aging, Health, and Longevity’, is however probably in general not worth the effort of reading unless you have a solid background in statistics.

This post includes some observations and quotes from the last chapters of the book’s Part 1.

“The proportion of older adults in the U.S. population is growing. This raises important questions about the increasing prevalence of aging-related diseases, multimorbidity issues, and disability among the elderly population. […] In 2009, 46.3 million people were covered by Medicare: 38.7 million of them were aged 65 years and older, and 7.6 million were disabled […]. By 2031, when the baby-boomer generation will be completely enrolled, Medicare is expected to reach 77 million individuals […]. Because the Medicare program covers 95 % of the nation’s aged population […], the prediction of future Medicare costs based on these data can be an important source of health care planning.”

“Three essential components (which could be also referred as sub-models) need to be developed to construct a modern model of forecasting of population health and associated medical costs: (i) a model of medical cost projections conditional on each health state in the model, (ii) health state projections, and (iii) a description of the distribution of initial health states of a cohort to be projected […] In making medical cost projections, two major effects should be taken into account: the dynamics of the medical costs during the time periods comprising the date of onset of chronic diseases and the increase of medical costs during the last years of life. In this chapter, we investigate and model the first of these two effects. […] the approach developed in this chapter generalizes the approach known as “life tables with covariates” […], resulting in a new family of forecasting models with covariates such as comorbidity indexes or medical costs. In sum, this chapter develops a model of the relationships between individual cost trajectories following the onset of aging-related chronic diseases. […] The underlying methodological idea is to aggregate the health state information into a single (or several) covariate(s) that can be determinative in predicting the risk of a health event (e.g., disease incidence) and whose dynamics could be represented by the model assumptions. An advantage of such an approach is its substantial reduction of the degrees of freedom compared with existing forecasting models  (e.g., the FEM model, Goldman and RAND Corporation 2004). […] We found that the time patterns of medical cost trajectories were similar for all diseases considered and can be described in terms of four components having the meanings of (i) the pre-diagnosis cost associated with initial comorbidity represented by medical expenditures, (ii) the cost peak associated with the onset of each disease, (iii) the decline/reduction in medical expenditures after the disease onset, and (iv) the difference between post- and pre-diagnosis cost levels associated with an acquired comorbidity. The description of the trajectories was formalized by a model which explicitly involves four parameters reflecting these four components.”

As I noted earlier in my coverage of the book, I don’t think the model above fully captures all relevant cost contributions of the diseases included, as the follow-up period was too short to capture all the relevant costs that ought to enter component (iv) of the model. This is definitely a problem in the context of diabetes. But then again, nothing in theory stops people from combining the model above with other models which are better at dealing with the excess costs associated with long-term complications of chronic diseases, and the model results were intriguing even if the model likely underperforms in a few specific disease contexts.
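Purely to fix ideas about the four components listed in the quote, here is a toy parameterization of my own of such a cost trajectory – a pre-diagnosis level, a spike at onset, a post-onset decline, and a permanently shifted post-diagnosis level. The functional form and the numbers are invented for illustration; this is not the model actually estimated in the book.

```python
import numpy as np

def cost_trajectory(t, pre, peak, decay, post_shift, t_onset=0.0):
    """Toy monthly-cost trajectory around a chronic-disease onset at t_onset.

    pre        -- (i)   pre-diagnosis cost level (initial comorbidity)
    peak       -- (ii)  extra cost associated with the onset itself
    decay      -- (iii) rate at which the onset-related costs decline afterwards
    post_shift -- (iv)  difference between post- and pre-diagnosis cost levels
    """
    t = np.asarray(t, dtype=float)
    out = np.full_like(t, pre)
    after = t >= t_onset
    out[after] = pre + post_shift + peak * np.exp(-decay * (t[after] - t_onset))
    return out

months = np.arange(-24, 61)   # two years before to five years after onset
costs = cost_trajectory(months, pre=500.0, peak=4000.0, decay=0.3, post_shift=250.0)
print(costs[22:28].round(0))  # cost levels in the months around onset
```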

Moving on…

“Models of medical cost projections usually are based on regression models estimated with the majority of independent predictors describing demographic status of the individual, patient’s health state, and level of functional limitations, as well as their interactions […]. If the health states needs to be described by a number of simultaneously manifested diseases, then detailed stratification over the categorized variables or use of multivariate regression models allows for a better description of the health states. However, it can result in an abundance of model parameters to be estimated. One way to overcome these difficulties is to use an approach in which the model components are demographically-based aggregated characteristics that mimic the effects of specific states. The model developed in this chapter is an example of such an approach: the use of a comorbidity index rather than of a set of correlated categorical regressor variables to represent the health state allows for an essential reduction in the degrees of freedom of the problem.”

“Unlike mortality, the onset time of chronic disease is difficult to define with high precision due to the large variety of disease-specific criteria for onset/incident case identification […] there is always some arbitrariness in defining the date of chronic disease onset, and a unified definition of date of onset is necessary for population studies with a long-term follow-up.”

“Individual age trajectories of physiological indices are the product of a complicated interplay among genetic and non-genetic (environmental, behavioral, stochastic) factors that influence the human body during the course of aging. Accordingly, they may differ substantially among individuals in a cohort. Despite this fact, the average age trajectories for the same index follow remarkable regularities. […] some indices tend to change monotonically with age: the level of blood glucose (BG) increases almost monotonically; pulse pressure (PP) increases from age 40 until age 85, then levels off and shows a tendency to decline only at later ages. The age trajectories of other indices are non-monotonic: they tend to increase first and then decline. Body mass index (BMI) increases up to about age 70 and then declines, diastolic blood pressure (DBP) increases until age 55–60 and then declines, systolic blood pressure (SBP) increases until age 75 and then declines, serum cholesterol (SCH) increases until age 50 in males and age 70 in females and then declines, ventricular rate (VR) increases until age 55 in males and age 45 in females and then declines. With small variations, these general patterns are similar in males and females. The shapes of the age-trajectories of the physiological variables also appear to be similar for different genotypes. […] The effects of these physiological indices on mortality risk were studied in Yashin et al. (2006), who found that the effects are gender and age specific. They also found that the dynamic properties of the individual age trajectories of physiological indices may differ dramatically from one individual to the next.”

“An increase in the mortality rate with age is traditionally associated with the process of aging. This influence is mediated by aging-associated changes in thousands of biological and physiological variables, some of which have been measured in aging studies. The fact that the age trajectories of some of these variables differ among individuals with short and long life spans and healthy life spans indicates that dynamic properties of the indices affect life history traits. Our analyses of the FHS data clearly demonstrate that the values of physiological indices at age 40 are significant contributors both to life span and healthy life span […] suggesting that normalizing these variables around age 40 is important for preventing age-associated morbidity and mortality later in life. […] results [also] suggest that keeping physiological indices stable over the years of life could be as important as their normalizing around age 40.”

“The results […] indicate that, in the quest of identifying longevity genes, it may be important to look for candidate genes with pleiotropic effects on more than one dynamic characteristic of the age-trajectory of a physiological variable, such as genes that may influence both the initial value of a trait (intercept) and the rates of its changes over age (slopes). […] Our results indicate that the dynamic characteristics of age-related changes in physiological variables are important predictors of morbidity and mortality risks in aging individuals. […] We showed that the initial value (intercept), the rate of changes (slope), and the variability of a physiological index, in the age interval 40–60 years, significantly influenced both mortality risk and onset of unhealthy life at ages 60+ in our analyses of the Framingham Heart Study data. That is, these dynamic characteristics may serve as good predictors of late life morbidity and mortality risks. The results also suggest that physiological changes taking place in the organism in middle life may affect longevity through promoting or preventing diseases of old age. For non-monotonically changing indices, we found that having a later age at the peak value of the index […], a lower peak value […], a slower rate of decline in the index at older ages […], and less variability in the index over time, can be beneficial for longevity. Also, the dynamic characteristics of the physiological indices were, overall, associated with mortality risk more significantly than with onset of unhealthy life.”
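
To illustrate the ‘intercept, slope, and variability as predictors’ idea (this is my own toy example with invented numbers, not the authors’ model – their joint models of trajectories and hazards are far more sophisticated than a logistic regression): the basic two-step logic of summarizing each individual’s age 40–60 trajectory of a physiological index by its level, trend, and variability, and then using those summaries as predictors of a late-life outcome, can be sketched with simulated data like this:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 2000
ages = np.arange(40, 61, 2)                  # biennial 'exams' between ages 40 and 60

# Simulated systolic blood pressure trajectories: person-specific level, trend, variability.
level = rng.normal(125, 10, n)               # mmHg at age 40
trend = rng.normal(0.6, 0.3, n)              # mmHg per year of age
wobble = rng.uniform(2, 8, n)                # within-person standard deviation
sbp = (level[:, None] + trend[:, None] * (ages - 40)
       + rng.normal(0, 1, (n, ages.size)) * wobble[:, None])

# Invented late-life outcome, constructed so that a higher level, a steeper rise and
# more variability all increase the risk of death before age 80.
risk = -3 + 0.03 * (level - 125) + 1.5 * trend + 0.15 * wobble
died_before_80 = rng.binomial(1, 1 / (1 + np.exp(-risk)))

# Step 1: summarize each person's age 40-60 trajectory by intercept, slope and variability.
features = np.empty((n, 3))
for i in range(n):
    slope_hat, intercept_hat = np.polyfit(ages - 40, sbp[i], 1)
    residuals = sbp[i] - (intercept_hat + slope_hat * (ages - 40))
    features[i] = [intercept_hat, slope_hat, residuals.std()]

# Step 2: use those three summaries as predictors of the late-life outcome.
model = make_pipeline(StandardScaler(), LogisticRegression()).fit(features, died_before_80)
coefs = model.named_steps["logisticregression"].coef_[0]
print(dict(zip(["intercept", "slope", "variability"], coefs.round(2))))
```

Because the simulated outcome is built to depend positively on all three trajectory features, the regression should recover positive coefficients for each of them – the qualitative pattern the quoted results describe for the Framingham data.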

“Decades of studies of candidate genes show that they are not linked to aging-related traits in a straightforward manner […]. Recent genome-wide association studies (GWAS) have reached fundamentally the same conclusion by showing that the traits in late life likely are controlled by a relatively large number of common genetic variants […]. Further, GWAS often show that the detected associations are of tiny effect […] the weak effect of genes on traits in late life can be not only because they confer small risks having small penetrance but because they confer large risks but in a complex fashion […] In this chapter, we consider several examples of complex modes of gene actions, including genetic tradeoffs, antagonistic genetic effects on the same traits at different ages, and variable genetic effects on lifespan. The analyses focus on the APOE common polymorphism. […] The analyses reported in this chapter suggest that the e4 allele can be protective against cancer with a more pronounced role in men. This protective effect is more characteristic of cancers at older ages and it holds in both the parental and offspring generations of the FHS participants. Unlike cancer, the effect of the e4 allele on risks of CVD is more pronounced in women. […] [The] results […] explicitly show that the same allele can change its role on risks of CVD in an antagonistic fashion from detrimental in women with onsets at younger ages to protective in women with onsets at older ages. […] e4 allele carriers have worse survival compared to non-e4 carriers in each cohort. […] Sex stratification shows sexual dimorphism in the effect of the e4 allele on survival […] with the e4 female carriers, particularly, being more exposed to worse survival. […] The results of these analyses provide two important insights into the role of genes in lifespan. First, they provide evidence on the key role of aging-related processes in genetic susceptibility to lifespan. For example, taking into account the specifics of aging-related processes gains 18 % in estimates of the RRs and five orders of magnitude in significance in the same sample of women […] without additional investments in increasing sample sizes and new genotyping. The second is that a detailed study of the role of aging-related processes in estimates of the effects of genes on lifespan (and healthspan) helps in detecting more homogeneous [high risk] sub-samples”.

“The aging of populations in developed countries requires effective strategies to extend healthspan. A promising solution could be to yield insights into the genetic predispositions for endophenotypes, diseases, well-being, and survival. It was thought that genome-wide association studies (GWAS) would be a major breakthrough in this endeavor. Various genetic association studies including GWAS assume that there should be a deterministic (unconditional) genetic component in such complex phenotypes. However, the idea of unconditional contributions of genes to these phenotypes faces serious difficulties which stem from the lack of direct evolutionary selection against or in favor of such phenotypes. In fact, evolutionary constraints imply that genes should be linked to age-related phenotypes in a complex manner through different mechanisms specific for given periods of life. Accordingly, the linkage between genes and these traits should be strongly modulated by age-related processes in a changing environment, i.e., by the individuals’ life course. The inherent sensitivity of genetic mechanisms of complex health traits to the life course will be a key concern as long as genetic discoveries continue to be aimed at improving human health.”

“Despite the common understanding that age is a risk factor of not just one but a large portion of human diseases in late life, each specific disease is typically considered as a stand-alone trait. Independence of diseases was a plausible hypothesis in the era of infectious diseases caused by different strains of microbes. Unlike those diseases, the exact etiology and precursors of diseases in late life are still elusive. It is clear, however, that the origin of these diseases differs from that of infectious diseases and that age-related diseases reflect a complicated interplay among ontogenetic changes, senescence processes, and damages from exposures to environmental hazards. Studies of the determinants of diseases in late life provide insights into a number of risk factors, apart from age, that are common for the development of many health pathologies. The presence of such common risk factors makes chronic diseases and hence risks of their occurrence interdependent. This means that the results of many calculations using the assumption of disease independence should be used with care. Chapter 4 argued that disregarding potential dependence among diseases may seriously bias estimates of potential gains in life expectancy attributable to the control or elimination of a specific disease and that the results of the process of coping with a specific disease will depend on the disease elimination strategy, which may affect mortality risks from other diseases.”

April 17, 2017 Posted by | Biology, Books, Cancer/oncology, Demographics, Economics, Epidemiology, Genetics, Medicine, Statistics

Words

Lately I’ve been reading some of George MacDonald Fraser’s Flashman books, which have been quite enjoyable reads in general; I’m reading the books in the order in which the events in the books supposedly took place, not in the order in which the books were published. A large number of the words included below are words I encountered in the first three of the books I read (i.e. Flashman, Royal Flash, and Flashman’s Lady); by that point the post already included a large number of words (roughly 120), so I saw no need to add additional words from the other books in the series. I have reviewed a few of the Flashman books I’ve read on goodreads here, here, and here.

Havildar, gimbal, quorum, unmannerly, tribulation, thalassophobia, kiln, sheave, grody, contemn, arcanum, deloping, poulterer, fossorial, catamount, guttersnipe, nabob, frond, matelot, jetty.

Sangar, palliasse, junoesque, cornet, bugle, fettle, toady, thong, trollop, sepoy, wattle, hardtack, snuffle, chunter, ghillie, barker, trousseau, simper, madcap, ramrod.

Welt, landau, declaim, burgomaster, scupper, windlass, maunder, sniffy, sirdar, randy, dowager, toffs, pug, curvet, pish, scriveners, hoyden, manikin, lecher/lechery, busby.

Ruck, leery, ninny, shillyshally, mincing, ringlet, covey, pip, munshi, risaldar, maidan, palankeen/palanquin, forbye, feringhee, cantonment, puggaree, pannikin, dollymop, snook, cordage.

Suet/suety, strumpet, kenspeckle, magsman, scrag, chandler, prigger, chivvy, décolleté, dundrearies, assignation, bruit, purblind, trull, slattern, coffle, doggo, cellarette, cummerbund, agley.

Sampan, wideawake, popsy, collation, déshabillé, pinnace, pennant, murk, sprig, linstock, tassel, bangle, trammel, prau, shellback, shako, clobber, taffrail, crinoline, taffeta, commonalty.

April 15, 2017 Posted by | Books, language

Quotes

All the quotes included in this post are from The Faber Book of Aphorisms, which I am currently reading.

i. “It is never any good dwelling on good-bys. It is not the being together that it prolongs, it is the parting.” (Elizabeth Bibesco)

ii. “Good manners are made up of petty sacrifices.” (Ralph Waldo Emerson)

iii. “One learns taciturnity best among people without it, and loquacity among the taciturn.” (Jean Paul Richter)

iv. “A man never reveals his character more vividly than when portraying the character of another.” (-ll-)

v. “That we seldom repent of talking too little and very often of talking too much is a … maxim that everybody knows and nobody practices.” (Jean de La Bruyère)

vi. “Never trust a man who speaks well of everybody.” (John Churton Collins)

vii. “People not used to the world … are unskillful enough to show what they have sense enough not to tell.” (Philip Dormer Stanhope, 4th Earl of Chesterfield)

viii. “To most men, experience is like the stern lights of a ship, which illumine only the track it has passed.” (Samuel Taylor Coleridge)

ix. “Those who know the least obey the best.” (George Farquhar)

x. “Monkeys are superior to men in this: when a monkey looks into a mirror, he sees a monkey.” (Malcolm de Chazal)

xi. “It can be shown that a mathematical web of some kind can be woven about any universe containing several objects. The fact that our universe lends itself to mathematical treatment is not a fact of any great philosophical significance.” (Bertrand Russell)

xii. “You can change your faith without changing gods, and vice versa.” (Stanisław Jerzy Lec)

xiii. “Religion is the masterpiece of the art of animal training, for it trains people as to how they shall think.” (Arthur Schopenhauer)

xiv. “The vanity of being known to be trusted with a secret is generally one of the chief motives to disclose it.” (Samuel Johnson)

xv. “No man is exempt from saying silly things; the mischief is to say them deliberately.” (Michel de Montaigne)

xvi. “Many promising reconciliations have broken down because, while both parties came prepared to forgive, neither party came prepared to be forgiven.” (Charles Williams)

xvii. “Ambition is pitiless. Any merit that it cannot use it finds despicable.” (Joseph Joubert)

xviii. “Experience is the name everyone gives to his mistakes.” (Oscar Wilde)

xix. “Nothing is enough to the man for whom enough is too little.” (Epicurus)

xx. “To measure up to all that is demanded of him, a man must overestimate his capacities.” (Johann Wolfgang von Goethe)

March 27, 2017 Posted by | Books, Quotes/aphorisms

Words

Over the last couple of weeks I’ve been reading James Herriot’s books and yesterday I finished the last one in the series. The five books (or 8, if you’re British – see the wiki…) I read – I skipped the ‘dog stories’ publication on that list because that book is just a collection of stories included in the other books – contain almost 2500 pages (2479, according to the goodreads numbers provided in the context of the editions I’ve been reading), and they also contained quite a few unfamiliar/nice words and expressions, many of which are included below. If you’re curious about the Herriot books you can read my goodreads reviews of the books here, here (very short), here, and here (I didn’t review The Lord God Made Them All).

Eversion, skeevy, censer, knout, byre, electuary, trocar/trocarization, clog, irascible, gilt, curvet, bullock, niggle, scapegrace, cur, pantile, raddle, scamper, skitter, odoriferous.

Dewlap, seton, muzzy, stirk, shillelagh, borborygmi, omentum, fettle, guddle, cruciate, peduncle/pedunculated, ecraseur, curlew, gabble, gable, festoon, cornada, lambent, lank.

Lope, billet, casement, scree, caliper, dale, stoup, puisne, tumefy, scamp, probang, famble, footling, colostrum, towsle/tousle, loquacious, dapper, cob, meconium, locum.

Mullion, roan, slat, dustman, carvery, abomasum, rostrum, zareba, flit, hackle, tympanites, pewter, opisthotonos, concertina, miliary, lief, spay, otodectic.

March 24, 2017 Posted by | Books, language

Quotes

i. “Fraud and falsehood only dread examination. Truth invites it.” (Thomas Cooper)

ii. “However well equipped our language, it can never be forearmed against all possible cases that may arise and call for description: fact is richer than diction.” (J. L. Austin)

iii. “There is no loneliness like the loneliness of crowds, especially to those who are unaccustomed to them.” (H. Rider Haggard)

iv. “All men are moral. Only their neighbors are not.” (John Steinbeck)

v. “The unfortunate thing is that, because wishes sometimes come true, the agony of hoping is perpetuated.” (Marguerite Cleenewerck de Crayencour)

vi. “All cruel people describe themselves as paragons of frankness.” (Tennessee Williams)

vii. “If you do not have the capacity for happiness with a little money, great wealth will not bring it to you.” (William Feather)

viii. “Anyone who can think clearly can write clearly. But neither is easy.” (-ll-)

ix. “No one’s reputation is quite what he himself perceives it ought to be.” (Christopher Vokes)

x. “[T]he question is not how to avoid procrastination, but how to procrastinate well. There are three variants of procrastination, depending on what you do instead of working on something: you could work on (a) nothing, (b) something less important, or (c) something more important. That last type, I’d argue, is good procrastination.” (Paul Graham)

xi. “At every period of history, people have believed things that were just ridiculous, and believed them so strongly that you risked ostracism or even violence by saying otherwise. If our own time were any different, that would be remarkable. As far as I can tell it isn’t.” (-ll-)

xii. “There can be no doubt that the knowledge of logic is of considerable practical importance for everyone who desires to think and infer correctly.” (Alfred Tarski)

xiii. “Logic and truth are two very different things, but they often look the same to the mind that’s performing the logic.” (Theodore Sturgeon)

xiv. “I don’t like it; I can’t approve of it; I have always thought it most regrettable that earnest and ethical Thinkers like ourselves should go scuttling through space in this undignified manner. Is it seemly that I, at my age, should be hurled with my books of reference, and bed-clothes, and hot-water bottle, across the sky at the unthinkable rate of nineteen miles a second? As I say, I don’t at all like it.” (Logan Pearsall Smith, All Trivia).

xv. “That we should practice what we preach is generally admitted; but anyone who preaches what he and his hearers practise must incur the gravest moral disapprobation.” (-ll-)

xvi. “Our names are labels, plainly printed on the bottled essence of our past behaviour.” (-ll-)

xvii. “It’s an odd thing about this Universe that though we all disagree with each other, we are all of us always in the right.” (-ll-)

xviii. “Those who say everything is pleasant and everyone delightful, come to the awful fate of believing what they say.” (-ll-)

xix. “He who goes against the fashion is himself its slave.” (-ll-)

xx. “When I read in the Times about India and all its problems and populations; when I look at the letters in large type of important personages, and find myself face to face with the Questions, Movements, and great Activities of the Age, ‘Where do I come in?’ I ask uneasily.
Then in the great Times-reflected world I find the corner where I play my humble but necessary part. For I am one of the unpraised, unrewarded millions without whom Statistics would be a bankrupt science. It is we who are born, who marry, who die, in constant ratios; who regularly lose so many umbrellas, post just so many unaddressed letters every year. And there are enthusiasts among us, Heroes who, without the least thought of their own convenience, allow omnibuses to run over them, or throw themselves, great-heartedly, month by month, in fixed numbers, from London bridges.” (-ll-)

March 9, 2017 Posted by | Books, Quotes/aphorisms

Biodemography of aging (II)

In my first post about the book I included a few general remarks about the book and what it’s about. In this post I’ll continue my coverage of the book, starting with a few quotes from and observations related to the content in chapter 4 (‘Evidence for Dependence Among Diseases‘).

“To compare the effects of public health policies on a population’s characteristics, researchers commonly estimate potential gains in life expectancy that would result from eradication or reduction of selected causes of death. For example, Keyfitz (1977) estimated that eradication of cancer would result in 2.265 years of increase in male life expectancy at birth (or by 3 % compared to its 1964 level). Lemaire (2005) found that the potential gain in the U.S. life expectancy from cancer eradication would not exceed 3 years for both genders. Conti et al. (1999) calculated that the potential gain in life expectancy from cancer eradication in Italy would be 3.84 years for males and 2.77 years for females. […] All these calculations assumed independence between cancer and other causes of death. […] for today’s populations in developed countries, where deaths from chronic non-communicable diseases are in the lead, this assumption might no longer be valid. An important feature of such chronic diseases is that they often develop in clusters manifesting positive correlations with each other. The conventional view is that, in a case of such dependence, the effect of cancer eradication on life expectancy would be even smaller.”

I think the great majority of people you asked would have assumed that the beneficial effect of hypothetical cancer eradication in humans on human life expectancy would be much larger than this, but that’s just an impression. I’ve seen estimates like these before, so I was not surprised – but I think many people would be if they knew this. A very large number of people die as a result of developing cancer today, but the truth of the matter is that if they hadn’t died from cancer they’d have died anyway, and on average probably not really all that much later. I linked to Richard Alexander’s comments on this topic in my last post about the book, and again his observations apply so I thought I might as well add the relevant quote from the book here:

“In the course of working against senescence, selection will tend to remove, one by one, the most frequent sources of mortality as a result of senescence. Whenever a single cause of mortality, such as a particular malfunction of any vital organ, becomes the predominant cause of mortality, then selection will more effectively reduce the significance of that particular defect (meaning those who lack it will outreproduce) until some other achieves greater relative significance. […] the result will be that all organs and systems will tend to deteriorate together. […] The point is that as we age, and as senescence proceeds, large numbers of potential sources of mortality tend to lurk ever more malevolently just “below the surface,” so that, unfortunately, the odds are very high against any dramatic lengthening of the maximum human lifetime through technology.”

Remove one cause of death and there are plenty of others standing in line behind it. We already knew that; two hundred years ago one out of every four deaths in England was the result of tuberculosis, but developing treatments for tuberculosis and other infectious diseases did not mean that English people stopped dying; these days they just die from cardiovascular disease and cancer instead. Do note in the context of that quote that Alexander is talking about the maximum human lifetime, not average life expectancy; again, we know and have known for a long time that human technology can have a dramatic effect on the latter variable. Of course a shift in one distribution is likely to have spill-over effects on the other (if more people are alive at the age of 70, the pool of people who might also live on to reach e.g. 100 years is larger, even if the mortality rate for the 70–100 year old group did not change); the point is just that these effects are secondary and likely to be marginal at best.
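
To put some toy numbers on the point – this is my own back-of-the-envelope sketch, not a calculation from the book – assume adult mortality follows a Gompertz hazard (parameters invented, chosen only to give roughly realistic adult death rates), and assume, purely for simplicity, that cancer accounts for a fixed quarter of that hazard and acts independently of the other causes. ‘Eradicating’ cancer then just deletes that share of the hazard, and under these assumptions the resulting gain in remaining life expectancy works out to roughly ln(1/0.75)/b ≈ 3 years – the same order of magnitude as the estimates quoted above:

```python
import numpy as np

# Illustrative Gompertz hazard for adult mortality: h(x) = a * exp(b * x).
a, b = 3e-5, 0.09          # invented parameters, roughly realistic adult death rates
cancer_share = 0.25        # assume cancer contributes a fixed 25% of the total hazard

dx = 0.1
ages = np.arange(30, 120, dx)   # integrate remaining lifetime from age 30 upwards

def remaining_life_expectancy(hazard):
    # S(x) = exp(-cumulative hazard); e_30 is the integral of S(x) over age
    survival = np.exp(-np.cumsum(hazard) * dx)
    return survival.sum() * dx

h_all = a * np.exp(b * ages)
h_no_cancer = (1 - cancer_share) * h_all   # 'eradicate' cancer, assuming independence

e_all = remaining_life_expectancy(h_all)
e_no_cancer = remaining_life_expectancy(h_no_cancer)
print(f"remaining life expectancy at 30, all causes:     {e_all:.1f} years")
print(f"remaining life expectancy at 30, cancer removed: {e_no_cancer:.1f} years")
print(f"gain from 'eradicating' cancer:                  {e_no_cancer - e_all:.1f} years")
```

The gain is modest precisely because the remaining hazard keeps growing exponentially with age: the deaths prevented by removing one cause are quickly ‘reclaimed’ by the others.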

Anyway, some more stuff from the chapter. Just like the previous chapter in the book did, this one also includes analyses of very large data sets:

The Multiple Cause of Death (MCD) data files contain information about underlying and secondary causes of death in the U.S. during 1968–2010. In total, they include more than 65 million individual death certificate records. […] we used data for the period 1979–2004.”

There’s some formal modelling stuff in the chapter which I won’t go into in detail here; this is the chapter in which I encountered the comment about ‘the multivariate lognormal frailty model’ I included in my first post about the book. One of the things the chapter looks at is the joint frequencies of deaths from cancer and other fatal diseases; it turns out that there are multiple diseases that are negatively related to cancer as a cause of death when you look at the population-level data mentioned above. The chapter goes into some of the biological mechanisms which may help explain why these associations look the way they do, and I’ll quote a little from that part of the coverage. A key idea here is (as always..?) that there are tradeoffs at play; some genetic variants may help protect you against e.g. cancer, but at the same time increase the risk of other diseases for the same reason that they protect you against cancer. In the context of the relationship between cancer deaths and deaths from other diseases they note in the conclusion that: “One potential biological mechanism underlying the negative correlation among cancer and other diseases could be related to the differential role of apoptosis in the development of these diseases.” The chapter covers that stuff in significantly more detail, and I decided to add some observations from the chapter on these topics below:

“Studying the role of the p53 gene in the connection between cancer and cellular aging, Campisi (2002, 2003) suggested that longevity may depend on a balance between tumor suppression and tissue renewal mechanisms. […] Although the mechanism by which p53 regulates lifespan remains to be determined, […] findings highlight the possibility that careful manipulation of p53 activity during adult life may result in beneficial effects on healthy lifespan. Other tumor suppressor genes are also involved in regulation of longevity. […] In humans, Dumont et al. (2003) demonstrated that a replacement of arginine (Arg) by proline (Pro) at position 72 of human p53 decreases its ability to initiate apoptosis, suggesting that these variants may differently affect longevity and vulnerability to cancer. Van Heemst et al. (2005) showed that individuals with the Pro/Pro genotype of p53 corresponding to reduced apoptosis in cells had significantly increased overall survival (by 41%) despite a more than twofold increased proportion of cancer deaths at ages 85+, together with a decreased proportion of deaths from senescence related causes such as COPD, fractures, renal failure, dementia, and senility. It was suggested that human p53 may protect against cancer but at a cost of longevity. […] Other biological factors may also play opposing roles in cancer and aging and thus contribute to respective trade-offs […]. E.g., higher levels of IGF-1 [have been] linked to both cancer and attenuation of phenotypes of physical senescence, such as frailty, sarcopenia, muscle atrophy, and heart failure, as well as to better muscle regeneration”.

“The connection between cancer and longevity may potentially be mediated by trade-offs between cancer and other diseases which do not necessarily involve any basic mechanism of aging per se. In humans, it could result, for example, from trade-offs between vulnerabilities to cancer and AD, or to cancer and CVD […] There may be several biological mechanisms underlying the negative correlation among cancer and these diseases. One can be related to the differential role of apoptosis in their development. For instance, in stroke, the number of dying neurons following brain ischemia (and thus probability of paralysis or death) may be less in the case of a downregulated apoptosis. As for cancer, the downregulated apoptosis may, conversely, mean a higher risk of the disease because more cells may survive damage associated with malignant transformation. […] Also, the role of the apoptosis may be different or even opposite in the development of cancer and Alzheimer’s disease (AD). Indeed, suppressed apoptosis is a hallmark of cancer, while increased apoptosis is a typical feature of AD […]. If so, then chronically upregulated apoptosis (e.g., due to a genetic polymorphism) may potentially be protective against cancer, but be deleterious in relation to AD. […] Increased longevity can be associated not only with increased but also with decreased chances of cancer. […] The most popular to-date “anti-aging” intervention, caloric restriction, often results in increased maximal life span along with reduced tumor incidence in laboratory rodents […] Because the rate of apoptosis was significantly and consistently higher in food restricted mice regardless of age, James et al. (1998) suggested that caloric restriction may have a cancer-protective effect primarily due to the upregulated apoptosis in these mice.”

Below I’ll discuss content covered in chapter 5, which deals with ‘Factors That May Increase Vulnerability to Cancer and Longevity in Modern Human Populations’. I’ll start out with a few quotes:

“Currently, the overall cancer incidence rate (age-adjusted) in the less developed world is roughly half that seen in the more developed world […] For countries with similar levels of economic development but different climate and ethnic characteristics […], the cancer rate patterns look much more similar than for the countries that share the same geographic location, climate, and ethnic distribution, but differ in the level of economic development […]. This suggests that different countries may share common factors linked to economic prosperity that could be primarily responsible for the modern increases in overall cancer risk. […] Population aging (increases in the proportion of older people) may […] partly explain the rise in the global cancer burden […]; however, it cannot explain increases in age-specific cancer incidence rates over time […]. Improved diagnostics and elevated exposures to carcinogens may explain increases in rates for selected cancer sites, but they cannot fully explain the increase in the overall cancer risk, nor incidence rate trends for most individual cancers (Jemal et al. 2008, 2013).”

“[W]e propose that the association between the overall cancer risk and the economic progress and spread of the Western lifestyle could in part be explained by the higher proportion of individuals more susceptible to cancer in the populations of developed countries, and discuss several mechanisms of such an increase in the proportion of the vulnerable. […] mechanisms include but are not limited to: (i) Improved survival of frail individuals. […] (ii) Avoiding or reducing traditional exposures. Excessive disinfection and hygiene typical of the developed world can diminish exposure to some factors that were abundant in the past […] Insufficiently or improperly trained immune systems may be less capable of resisting cancer. (iii) Burden of novel exposures. Some new medicines, cleaning agents, foods, etc., that are not carcinogenic themselves may still affect the natural ways of processing carcinogens in the body, and through this increase a person’s susceptibility to established carcinogens. [If this one sounds implausible to you, I’ll remind you that drug metabolism is complicated – US] […] (iv) Some of the factors linked to economic prosperity and the Western lifestyle (e.g., delayed childbirth and food enriched with growth factors) may antagonistically influence aging and cancer risk.”

They provide detailed coverage of all of these mechanisms in the chapter; below I have included a few select observations from that part of the coverage.

“There was a dramatic decline in infant and childhood mortality in developed countries during the last century. For example, the infant mortality rate in the United States was about 6 % of live births in 1935, 3 % in 1950, 1.3 % in 1980, and 0.6 % in 2010. That is, it declined tenfold over the course of 75 years […] Because almost all children (including those with immunity deficiencies) survive, the proportion of the children who are inherently more vulnerable could be higher in the more developed countries. This is consistent with a typically higher proportion of children with chronic inflammatory immune disorders such as asthma and allergy in the populations of developed countries compared to less developed ones […] Over-reduction of such traditional exposures may result in an insufficiently/improperly trained immune system early in life, which could make it less able to resist diseases, including cancer later in life […] There is accumulating evidence of the important role of these effects in cancer risk. […] A number of studies have connected excessive disinfection and lack of antigenic stimulation (especially in childhood) of the immune system in Westernized communities with increased risks of both chronic inflammatory diseases and cancer […] The IARC data on migrants to Israel […] allow for comparison of the age trajectories of cancer incidence rates between adult Jews who live in Israel but were born in other countries […] [These data] show that Jews born in less developed regions (Africa and Asia) have overall lower cancer risk than those born in the more developed regions (Europe and America).  The discrepancy is unlikely to be due to differences in cancer diagnostics because at the moment of diagnosis all these people were citizens of the same country with the same standard of medical care. These results suggest that surviving childhood and growing up in a less developed country with diverse environmental exposures might help form resistance to cancer that lasts even after moving to a high risk country.”

I won’t go much into the ‘burden of novel exposures’ part, but I should note that exposures that may be relevant include factors like paracetamol use and antibiotics for treatment of H. pylori. Paracetamol is not considered carcinogenic by the IARC, but we know from animal studies that if you give rats paracetamol and then expose them to an established carcinogen (with the straightforward name N-nitrosoethyl-N-hydroxyethylamine), the number of rats developing kidney cancer goes up. In the context of H. pylori, we know that this bacterium may cause stomach cancer, but when you treat rats with metronidazole (which is used to treat H. pylori) and expose them to an established carcinogen, they’re more likely to develop colon cancer. The link between colon cancer and antibiotics use has been noted in other contexts as well; decreased microbial diversity after antibiotics use may lead to suppression of the bifidobacteria and promotion of E. coli in the colon, the metabolic products of which may lead to increased cancer risk. Over time an increase in colon cancer risk and a decrease in stomach cancer risk has been observed in developed societies, but aside from changes in diet another factor which may play a role is population-wide exposure to antibiotics. Colon and stomach cancers are incidentally not the only ones of interest in this particular context; it has also been found that exposure to chloramphenicol, a broad-spectrum antibiotic in use since the 1940s, increases the risk of lymphoma in mice when the mice are exposed to a known carcinogen, despite the drug itself again not being clearly carcinogenic on its own.

Many new exposures aside from antibiotics are of course relevant. Two other drug-related ones that might be worth mentioning are hormone replacement therapy and contraceptives. HRT is not as commonly used today as it was in the past, but to give some idea of the scope here, half of all women in the US aged 50-65 are estimated to have been on HRT at the peak of its use, around the turn of the millennium, and HRT is assumed to be partly responsible for the higher incidence of hormone-related cancers observed in female populations living in developed countries. It’s of some note that the use of HRT dropped dramatically shortly after this peak (from 61 million prescriptions in 2001 to 21 million in 2004), and that the incidence of estrogen-receptor positive cancers subsequently dropped. As for oral contraceptives, these have been in use since the 1960s, and combined hormonal contraceptives are known to increase the risk of liver- and breast cancer, while seemingly also having a protective effect against endometrial cancer and ovarian cancer. The authors speculate that some of the cancer incidence changes observed in the US during the latter half of the last century, with a decline in female endometrial and ovarian cancer combined with an increase in breast- and liver cancer, could in part be related to widespread use of these drugs. An estimated 10% of all women of reproductive age alive in the world, and 16% of those living in the US, are estimated to be using combined hormonal contraceptives. In the context of the protective effect of the drugs, it should perhaps be noted that endometrial cancer in particular is strongly linked to obesity so if you are not overweight you are relatively low-risk.

Many ‘exposures’ in a cancer context are not drug-related. For example women in Western societies tend to go into menopause at a higher age, and higher age of menopause has been associated with hormone-related cancers; but again the picture is not clear in terms of how the variable affects longevity, considering that later menopause has also been linked to increased longevity in several large studies. In the studies the women did have higher mortality from the hormone-related cancers, but on the other hand they were less likely to die from some of the other causes, such as pneumonia, influenza, and falls. Age of childbirth is also a variable where there are significant differences between developed countries and developing countries, and this variable may also be relevant to cancer incidence as it has been linked to breast cancer and melanoma; in one study women who first gave birth after the age of 35 had a 40% increased risk of breast cancer compared to mothers who gave birth before the age of 20 (good luck ‘controlling for everything’ in a context like that, but…), and in a meta-analysis the relative risk for melanoma was 1.47 for women in the oldest age group having given birth, compared to the youngest (again, good luck controlling for everything, but at least it’s not just one study). Lest you think this literature only deals with women, it’s also been found that parental age seems to be linked to cancers in the offspring (higher parental age -> higher cancer risk in the offspring), though the effect sizes are not mentioned in the coverage.

Here’s what they conclude at the end of the chapter:

“Some of the factors associated with economic prosperity and a Western lifestyle may influence both aging and vulnerability to cancer, sometimes oppositely. Current evidence supports a possibility of trade-offs between cancer and aging-related phenotypes […], which could be influenced by delayed reproduction and exposures to growth factors […]. The latter may be particularly beneficial at very old age. This is because the higher levels of growth factors may attenuate some phenotypes of physical senescence, such as decline in regenerative and healing ability, sarcopenia, frailty, elderly fractures and heart failure due to muscle atrophy. They may also increase the body’s vulnerability to cancer, e.g., through growth promoting and anti-apoptotic effects […]. The increase in vulnerability to cancer due to growth factors can be compatible with extreme longevity because cancer is a major contributor to mortality mainly before age 85, while senescence-related causes (such as physical frailty) become major contributors to mortality at oldest old ages (85+). In this situation, the impact of growth factors on vulnerability to death could be more deleterious in middle-to-old life (~before 85) and more beneficial at older ages (85+).

The complex relationships between aging, cancer, and longevity are challenging. This complexity warns against simplified approaches to extending longevity without taking into account the possible trade-offs between phenotypes of physical aging and various health disorders, as well as the differential impacts of such tradeoffs on mortality risks at different ages (e.g., Ukraintseva and Yashin 2003a; Yashin et al. 2009; Ukraintseva et al. 2010, 2016).”

March 7, 2017 Posted by | Books, Cancer/oncology, Epidemiology, Genetics, Immunology, Medicine, Pharmacology

Economic Analysis in Healthcare (II)

This is my second and last post about the book; it includes some quotes from the second half of the book as well as some comments.

“Different countries have adopted very different health care financing systems. In fact, it is arguable that the arrangements for financing of health care are more variable between different countries than the financing of any other good or service. […] The mechanisms adopted to deal with moral hazard are similar in all systems, whilst the mechanisms adopted to deal with adverse selection and incomplete coverage are very different. Compulsory insurance is used by social insurance and taxation [schemes] to combat adverse selection and incomplete coverage. Private insurance relies instead on experience rating to address adverse selection and a mix of retrospective reimbursement and selective contracting and vertical integration to deal with incomplete coverage.”

I have mentioned this before here on the blog (and elsewhere), but it is worth reiterating because one sometimes encounters people who do not know this: there are some problems you’ll have to face when dealing with insurance markets which will be there regardless of which entity is in charge of the insurance scheme. It doesn’t matter whether your insurance system is government based or the government is not involved in the insurance scheme at all; moral hazard will be there either way as a potential problem, and you’re going to have to deal with that somehow. In econ 101 you tend to learn that ‘markets are great’, but this is one of those problems which will not go away through privatization.

On top of the problems common to all insurers and insurance systems, different types of systems will also tend to face different mixes of potential problems, some of which are likely to merit special attention in the specific setting in question. Because some problems are much more common in some settings than in others, deciding on what might be ‘the best’ institutional setup is to some extent also a matter of deciding which problem you are most concerned about addressing. It is also worth pointing out that most real-world systems are mixes of different systems rather than ‘pure’ systems, which means that evaluation problems tend to be harder than they might otherwise have been. To add to this complexity, as noted above the ways insurers deal with the same problem may differ across institutional setups, which is worth having in mind when performance is evaluated (i.e., the fact that country A has included in its insurance system a feature X intended to address problem Q does not mean that country B, which has not included X in its system, does not attempt to address problem Q; B may just be using feature Y instead of feature X to do so).

Chapter 7 of the book deals with Equity in health care, and although I don’t want to cover that chapter in any detail a few observations from the text I did find worth including in this post:

“In the 1930s, only 43% of the [UK] population were covered by the national insurance scheme, mainly men in manual and low-paid occupations, and covered only for GP services. Around 21 million people were not covered by any health insurance, and faced potentially catastrophic expenditure should they become ill.”

“The literature on equity in the finance of health care has focused largely on the extent to which health care is financed according to ability to pay, and in particular on whether people with different levels of income make […] different payments, which is a vertical equity concern. Much less attention has been paid to horizontal equity, which considers the extent to which people with the same income make the same payments. […] There is horizontal inequity if people with the same ability to pay for health care, for example the same income, pay different amounts for it. […] tax-based payments and social health insurance payments tend to have less horizontal inequity than private health insurance payments and direct out-of-pocket payments. […] there are many concepts of equity that could be pursued; these are limited only by our capacity to think about the different ways in which resources could be allocated. It is unsurprising therefore that so many concepts of equity are discussed in the literature.”

Chapter 8 is about ‘Health care labour markets’. Again I won’t cover the chapter in much detail – people interested in such topics might like to have a look at this paper, which, based on a brief skim, seems to cover a few of the topics also discussed in the chapter – but I did want to include a few data:

“[S]alaries and wages paid to health care workers account for a substantial component of total health expenditure: the average country devotes over 40% of its government-funded health expenditure to paying its health workforce […], though there are regional variations [from ~30% in Africa to ~50% in the US and the Middle East – the data source is WHO, and the numbers are from 2006]. […] The WHO estimates there are around 59 million paid health workers worldwide […], around nine workers for every 1 000 population, with around two-thirds of the total providing health care and one third working in a non-clinical capacity.”

The last few chapters of the book mostly cover either topics I have dealt with before in more detail – for example, most topics covered here which are also covered in Gray et al. are covered in much more detail in the latter book, which is natural as this text is mostly an introductory undergraduate text whereas the Gray et al. text is not (the latter book was based on material taught in a course called ‘Advanced Methods of Cost-Effectiveness Analysis’) – or topics in which I’m not actually all that interested (e.g. things like ‘extra-welfarism’). Below I have added some quotes from the remaining chapters. I apologize in advance for any repetition, given that I probably covered a lot of this material back when I covered Gray et al., but on the other hand I read that book a while ago anyway:

“Simply providing information on costs and benefits is in itself not evaluative. Rather, in economic evaluation this information is structured in such a way as to enable alternative uses of resources to be judged. There are many criteria that might be used for such judgements. […] The criteria that are the focus of economic analysis are efficiency and equity […] in practice efficiency is dealt with far more often and with greater attention to precise numerical estimates. […] In publicly provided health programmes, market forces might be weak or there might be none at all. Economic evaluation is largely concerned with measuring efficiency in areas where there is public involvement and there are no markets to generate the kind of information – for example, prices and profits – that enable us to judge this. […] The question of how costs and benefits are to be measured and weighed against each other is obviously a fundamental issue, and indeed forms the main body of work on the topic. The answers to this question are often pragmatic, but they also have very strong guides from theory.”

“[M]any support economic evaluation as a useful technique even where it falls short of being a full cost–benefit analysis [‘CBA’ – US], as it provides at least some useful information. A partial cost–benefit analysis usually means that some aspects of cost or benefit have been identified but not valued, and the usefulness of the information depends on whether we believe that if the missing elements were to be valued they would alter the balance of costs and benefits. […] A special case of a partial economic evaluation is where costs are valued but benefits are not. […] This kind of partial efficiency is dealt with by a different type of economic evaluation known as cost-effectiveness analysis (CEA). […] One rationale for CEA is that whilst costs are usually measured in terms of money, it may be much more difficult to measure benefits that way. […] Cost-effectiveness analysis tries to identify where more benefit can be produced at the same cost or a lower cost can be achieved for the same benefit. […] there are many cases where we may wish to compare alternatives in which neither benefits nor costs are held constant. In this case, a cost-effectiveness ratio (CER) – the cost per unit of output or effect – is calculated to compare the alternatives, with the implication that the lower the CER the better. […] CBA seeks to answer whether or not a particular output is worth the cost. CEA seeks to answer the question of which among two or more alternatives provides the most output for a given cost, or the lowest cost for a given output. CBA therefore asks whether or not we should do things, while CEA asks what is the best way to do things that are worth doing.”
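
A quick numerical illustration of the difference between the two questions (the numbers are mine and purely made up, not an example from the book): two screening strategies with different costs and different numbers of cases detected, compared by their cost-effectiveness ratios:

```python
# Illustrative only: costs in £, effects measured as cases detected per 10,000 people screened.
strategies = {
    "one-stage screen": {"cost": 150_000, "cases_detected": 60},
    "two-stage screen": {"cost": 240_000, "cases_detected": 75},
}

for name, s in strategies.items():
    cer = s["cost"] / s["cases_detected"]
    print(f"{name}: CER = £{cer:,.0f} per case detected")
```

Ranked by average CER the one-stage screen looks better (£2,500 vs. £3,200 per case detected), but the ratio alone cannot tell you whether the extra 15 cases found by the two-stage screen are worth the extra £90,000 – that is what the incremental analysis discussed further below is for, and whether screening is worth doing at all is the kind of question only a full CBA can answer.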

“The major preoccupation of economic evaluation in health care has been measurement of costs and benefits – what should be measured and how it should be measured – rather than the aims of the analysis. […] techniques such as CBA and CEA are […] defined by measurement rather than economic theory. […] much of the economic evaluation literature gives the label cost-minimisation analysis to what was traditionally called CEA, and specifically restricts the term CEA to choices between alternatives that have similar types of effects but differing levels of effect and costs. […] It can be difficult to specify what the appropriate measure of effect is in CEA. […] care is […] required to ensure that whichever measure of effect is chosen does not mislead or bias the analysis – for example, if one intervention is better at preventing non-fatal heart attacks but is worse at preventing fatal attacks, the choice of effect measure will be crucial.”

“[Health] indicators are usually measures of the value of health, although not usually expressed in money terms. As a result, a third important type of economic evaluation has arisen, called cost–utility analysis (CUA). […] the health measure usually used in CUA is gains in quality-adjusted life years […] it is essentially a composite measure of gains in life expectancy and health-related quality of life. […] the most commonly used practice in CUA is to use the QALY and moreover to assume that each QALY is worth the same irrespective of who gains it and by what route. […] Similarly, CBA in practice focuses on sums of benefits compared to sums of costs, not on the distribution of these between people with different characteristics. It also does not usually take account of whether society places different weights on benefits experienced by different people; for example, there is evidence that many people would prefer health services to put a higher priority on improving the health of younger rather than older people (Tsuchiya et al., 2003).”

“Because CEA does not give a direct comparison between the value of effects and costs, decision rules are far more complex than for CBA and are bounded by restrictions on their applicability. The problem arises when the alternatives being appraised do not have equal costs or benefits, but instead there is a trade-off: the greater benefit that one of the alternatives has is achieved at a higher cost [this is not a rare occurrence, to put it mildly…]. The key problem is how that trade-off is to be represented, and how it can then be interpreted; essentially, encapsulating cost-effectiveness in a single index that can unambiguously be interpreted for decision-making purposes.”

“Although cost-effectiveness analysis can be very useful, its essential inability to help in the kind of choices that cost–benefit analysis allows – an absolute recommendation for a particular activity rather than one contingent on a comparison with alternatives – has proved such a strong limitation that means have been sought to overcome it. The key to this has been the cost-effectiveness threshold or ceiling ratio, which is essentially a level of the CER that any intervention must meet if it is to be regarded as cost-effective. It can also be interpreted as the decision maker’s willingness to pay for a unit of effectiveness. […] One of the problems with this kind of approach is that it is no longer consistent with the conventional aim of CEA. Except under special conditions, it is not consistent with output maximisation constrained by a budget. […] It is useful to distinguish between a comparator that is essentially ‘do nothing about the problem […]’ and one that is ‘another way of doing something about that problem’. The CER that arises from the second of these is […] an incremental cost-effectiveness ratio (ICER) […] in most cases the ICER is the correct measure to use. […] A problem [with using ICERs] is that if only the ICER is evaluated, it must be assumed that the alternative used in the comparator is itself cost-effective; if it is not, the ICER may mislead.”
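
Here is the incremental logic with invented numbers (again my own illustration, not the book’s): each option is compared with the next most effective non-dominated alternative, and the resulting ICERs are judged against a willingness-to-pay threshold (the £20,000–30,000 per QALY range often mentioned in connection with NICE decisions is of that type; the value below is assumed purely for illustration, and the sketch skips the extended-dominance checks a full analysis would include):

```python
# Illustrative options for treating one condition: total cost and QALYs gained per patient.
options = [
    {"name": "standard care", "cost": 2_000, "qalys": 1.0},
    {"name": "new drug A", "cost": 8_000, "qalys": 1.4},
    {"name": "new drug B", "cost": 20_000, "qalys": 1.6},
]
threshold = 30_000  # assumed willingness to pay per QALY

options.sort(key=lambda o: o["qalys"])        # order by effectiveness
comparator = options[0]
print(f"{comparator['name']}: reference option")
for option in options[1:]:
    icer = (option["cost"] - comparator["cost"]) / (option["qalys"] - comparator["qalys"])
    verdict = "cost-effective at the threshold" if icer <= threshold else "not cost-effective"
    print(f"{option['name']}: ICER = £{icer:,.0f} per QALY vs {comparator['name']} ({verdict})")
    if icer <= threshold:
        comparator = option   # only a cost-effective option becomes the new comparator
```

In this example drug A passes the threshold against standard care (£15,000 per QALY) while drug B fails against drug A (£60,000 per QALY) – and note that if B had instead been compared only with standard care, its ICER of £30,000 per QALY would have looked acceptable at the threshold, which is exactly the ‘the comparator must itself be cost-effective’ caveat in the quote above.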

“The basis of economic costing is […] quite distinct from accounting or financial cost approaches. The process of costing involves three steps: (1) identify and describe the changes in resource use, both increases and decreases, that are associated with the options to be evaluated; (2) quantify those changes in resource use in physical units; and (3) value those resources. […] many markets are not fully competitive. For example, the wages paid to doctors may be a reflection of the lobbying power of medical associations or restrictions to licensing, rather than the value of their skills […] The prices of drugs may reflect the effect of government regulations on licensing, pricing and intellectual property. Deviations of price from opportunity cost may arise from factors such as imperfect competition […] or from distortions to markets created by government interventions. Where these are known, prices should be adjusted […] In practice, such adjustments are difficult to make and would rely on good information on the underlying costs of production, which is often not available. Further, where the perspective is that of the health service, there is an argument for not adjusting prices, on the grounds that the prevailing prices, even if inefficient, are those they must pay and are relevant to their budget. […] Where prices are used, it is important to consider whether the option being evaluated will, if implemented, result in price changes. […] Valuing resource use becomes still more difficult in cases where there are no markets. This includes the value of patients’ time in seeking and receiving care or of caregivers’ time in providing informal supportive care. The latter can be an important element of costs and […] may be particularly important in the evaluation of health care options that rely on such inputs.”

“[A]lthough the emphasis in economic evaluation is on marginal changes in costs and benefits, the available data frequently relate to average costs […] There are two issues with using average cost data. First, the addition to or reduction in costs from increased or decreased resource use may be higher, lower or the same as the average cost. Unfortunately, knowing what the relationship is between average and marginal cost requires information on the latter – the absence of which is the reason average costs are used! Secondly, average cost data obscure potentially important issues with respect to the technical efficiency of providers. If average costs are derived in one setting, for example a hospital, this assumes that the hospital is using the optimal combination of inputs. If average costs are derived from multiple settings, they will include a variety of underlying production technologies and a variety of underlying levels of production efficiency. Average costs are therefore less than ideal, because they comprise a ‘black box’ of underlying cost and production decisions. […] Approaches to costing fall into two broad types: macro- or ‘top-down’ costing, and micro- or ‘bottom-up’ costing […] distinguished largely on the basis of the level of disaggregation […] A top-down approach may involve using pre-existing data on total or average costs and apportioning these in some way to the options being evaluated. […] In contrast, a bottom-up approach identifies, quantifies and values resources in a disaggregated way, so that each element of costs is estimated individually and they are summed up at the end. […] The separation of top-down and bottom-up costing approaches is not always clear. For example, often top-down studies are used to calculate unit costs, which are then combined with resource use data in bottom-up studies.”

“Health care programmes can affect both length and quality of life; these in turn interact with both current and future health care use, relating both to the condition of interest and to other conditions. Weinstein and Stason (1977) argue that the cost of ‘saving’ life in one way should include the future costs to the health service of death from other causes. […] In practice, different analysts respond to this issue in different ways: examples may be found of economic evaluations of mammography screening that do […] and do not […] incorporate future health care costs. Methodological differences of this sort reduce the ability to make valid comparisons between results. In practical terms, this issue is a matter of researcher discretion”.

The content of the last paragraph above is closely linked to material covered in the biodemography text I’m currently reading, and I expect to cover related topics in some detail here on the blog in the future. Below are a few final observations from the book about discounting:

“It is generally accepted that future costs should be discounted in an economic evaluation and, in CBA, it is also relatively non-controversial that benefits, in monetary terms, should also be discounted. In contrast, there is considerable debate surrounding the issue of whether to discount health outcomes such as QALYs, and what the appropriate discount rate is. […] The debate […] concentrates on the issue of whether people have a time preference for receiving health benefits now rather than in the future in the same way that they might have a time preference for gaining monetary benefits now rather than later in life. Arguments both for and against this view are plausible, and the issue is currently unresolved. […] The effect of not discounting health benefits is to improve the cost-effectiveness of all health care programmes that have benefits beyond the current time period, because not discounting increases the magnitude of the health benefits. But as well as affecting the apparent cost-effectiveness of programmes relative to some benchmark or threshold, the choice of whether to discount will also affect the cost-effectiveness of different health care programmes relative to each other […] Discounting health benefits tends to make those health care programmes with benefits realised mostly in the future, such as prevention, less cost-effective relative to those with benefits realised mostly in the present, such as cure.”
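
A small worked example of why this matters (my own numbers): two hypothetical programmes with the same up-front cost and the same undiscounted health gain of 10 QALYs, one delivering the gain immediately (‘cure’) and one delivering it 20–30 years from now (‘prevention’). Discounting the health benefits at 3.5% per year – a rate often used in UK guidance, applied here purely for illustration – roughly halves the prevention programme’s discounted QALYs and so roughly doubles its cost per discounted QALY:

```python
def present_value(stream, rate):
    """Discount a stream of (year, amount) pairs back to year 0."""
    return sum(amount / (1 + rate) ** year for year, amount in stream)

rate = 0.035       # assumed annual discount rate
cost = 100_000     # same up-front cost for both hypothetical programmes

# Same undiscounted gain of 10 QALYs, but very different timing.
cure_qalys = [(0, 10.0)]                                    # benefit realised immediately
prevention_qalys = [(year, 1.0) for year in range(20, 30)]  # 1 QALY per year, years 20-29

for name, stream in [("cure", cure_qalys), ("prevention", prevention_qalys)]:
    pv = present_value(stream, rate)
    print(f"{name:10s} discounted QALYs = {pv:5.2f}, cost per discounted QALY = £{cost / pv:,.0f}")
```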

March 5, 2017 Posted by | Books, Economics, health care

Diabetes and the brain (IV)

Here’s one of my previous posts in the series about the book. In this post I’ll cover material dealing with two acute hyperglycemia-related diabetic complications (DKA and HHS – see below…) as well as multiple topics related to diabetes and stroke. I’ll start out with a few quotes from the book about DKA and HHS:

“DKA [diabetic ketoacidosis] is defined by a triad of hyperglycemia, ketosis, and acidemia and occurs in the absolute or near-absolute absence of insulin. […] DKA accounts for the bulk of morbidity and mortality in children with T1DM. National population-based studies estimate DKA mortality at 0.15% in the United States (4), 0.18–0.25% in Canada (4, 5), and 0.31% in the United Kingdom (6). […] Rates reach 25–67% in those who are newly diagnosed (4, 8, 9). The rates are higher in younger children […] The risk of DKA among patients with pre-existing diabetes is 1–10% annual per person […] DKA can present with mild-to-severe symptoms. […] polyuria and polydipsia […] patients may present with signs of dehydration, such as tachycardia and dry mucus membranes. […] Vomiting, abdominal pain, malaise, and weight loss are common presenting symptoms […] Signs related to the ketoacidotic state include hyperventilation with deep breathing (Kussmaul’s respiration) which is a compensatory respiratory response to an underlying metabolic acidosis. Acetonemia may cause a fruity odor to the breath. […] Elevated glucose levels are almost always present; however, euglycemic DKA has been described (19). Anion-gap metabolic acidosis is the hallmark of this condition and is caused by elevated ketone bodies.”

“Clinically significant cerebral edema occurs in approximately 1% of patients with diabetic ketoacidosis […] DKA-related cerebral edema may represent a continuum. Mild forms resulting in subtle edema may result in modest mental status abnormalities whereas the most severe manifestations result in overt cerebral injury. […] Cerebral edema typically presents 4–12 h after the treatment for DKA is started (28, 29), but can occur at any time. […] Increased intracranial pressure with cerebral edema has been recognized as the leading cause of morbidity and mortality in pediatric patients with DKA (59). Mortality from DKA-related cerebral edema in children is high, up to 90% […] and accounts for 60–90% of the mortality seen in DKA […] many patients are left with major neurological deficits (28, 31, 35).”

“The hyperosmolar hyperglycemic state (HHS) is also an acute complication that may occur in patients with diabetes mellitus. It is seen primarily in patients with T2DM and has previously been referred to as “hyperglycemic hyperosmolar non-ketotic coma” or “hyperglycemic hyperosmolar non-ketotic state” (13). HHS is marked by profound dehydration and hyperglycemia and often by some degree of neurological impairment. The term hyperglycemic hyperosmolar state is used because (1) ketosis may be present and (2) there may be varying degrees of altered sensorium besides coma (13). Like DKA, the basic underlying disorder is inadequate circulating insulin, but there is often enough insulin to inhibit free fatty acid mobilization and ketoacidosis. […] Up to 20% of patients diagnosed with HHS do not have a previous history of diabetes mellitus (14). […] Kitabchi et al. estimated the rate of hospital admissions due to HHS to be lower than DKA, accounting for less than 1% of all primary diabetic admissions (13). […] Glucose levels rise in the setting of relative insulin deficiency. The low levels of circulating insulin prevent lipolysis, ketogenesis, and ketoacidosis (62) but are unable to suppress hyperglycemia, glucosuria, and water losses. […] HHS typically presents with one or more precipitating factors, similar to DKA. […] Acute infections […] account for approximately 32–50% of precipitating causes (13). […] The mortality rates for HHS vary between 10 and 20% (14, 93).”

It should perhaps be noted explicitly that the mortality rates for these complications are particularly high in the setting of either very young individuals (DKA) or elderly individuals (HHS), the latter of whom may have multiple comorbidities. Relatedly, HHS often develops acutely in settings where the precipitating factor is itself something really unpleasant, like pneumonia or a cardiovascular event, so a high-ish mortality rate is perhaps not that surprising. Nor is it surprising that very young brains are particularly vulnerable in the context of DKA (I already discussed some of the research on these matters in some detail in an earlier post about this book).

This post to some extent covered the topic of ‘stroke in general’; however, I wanted to include here some more data dealing specifically with the diabetes-related aspects of the topic. Here’s a quote to start off with:

“DM [Diabetes Mellitus] has been consistently shown to represent a strong independent risk factor of ischemic stroke. […] The contribution of hyperglycemia to increased stroke risk is not proven. […] the relationship between hyperglycemia and stroke remains subject of debate. In this respect, the association between hyperglycemia and cerebrovascular disease is established less strongly than the association between hyperglycemia and coronary heart disease. […] The course of stroke in patients with DM is characterized by higher mortality, more severe disability, and higher recurrence rate […] It is now well accepted that the risk of stroke in individuals with DM is equal to that of individuals with a history of myocardial infarction or stroke, but no DM (24–26). This was confirmed in a recently published large retrospective study which enrolled all inhabitants of Denmark (more than 3 million people out of whom 71,802 patients with DM) and were followed-up for 5 years. In men without DM the incidence of stroke was 2.5 in those without and 7.8% in those with prior myocardial infarction, whereas in patients with DM it was 9.6 in those without and 27.4% in those with history of myocardial infarction. In women the numbers were 2.5, 9.0, 10.0, and 14.2%, respectively (22).

That study incidentally is very nice for me in particular to know about, given that I am a Danish diabetic. I do not here face any of the usual tiresome questions about ‘external validity’ and issues pertaining to ‘extrapolating out of sample’ – not only is it quite likely I’ve actually looked at some of the data used in that analysis myself, I also know that I am almost certainly one of the people included in the analysis. Of course you need other data as well to assess risk (e.g. age, see the previously linked post), but this is pretty clean as far as it goes. Moving on…

“The number of deaths from stroke attributable to DM is highest in low-and-middle-income countries […] the relative risk conveyed by DM is greater in younger subjects […] It is not well known whether type 1 or type 2 DM affects stroke risk differently. […] In the large cohort of women enrolled in the Nurses’ Health Study (116,316 women followed for up to 26 years) it was shown that the incidence of total stroke was fourfold higher in women with type 1 DM and twofold higher among women with type 2 DM than for non-diabetic women (33). […] The impact of DM duration as a stroke risk factor has not been clearly defined. […] In this context it is important to note that the actual duration of type 2 DM is difficult to determine precisely […and more generally: “the date of onset of a certain chronic disease is a quantity which is not defined as precisely as mortality“, as Yashin et al. put it – I also talked about this topic in my previous post, but it’s important when you’re looking at these sorts of things and is worth reiterating – US]. […] Traditional risk factors for stroke such as arterial hypertension, dyslipidemia, atrial fibrillation, heart failure, and previous myocardial infarction are more common in people with DM […]. However, the impact of DM on stroke is not just due to the higher prevalence of these risk factors, as the risk of mortality and morbidity remains over twofold increased after correcting for these factors (4, 37). […] It is informative to distinguish between factors that are non-specific and specific to DM. DM-specific factors, including chronic hyperglycemia, DM duration, DM type and complications, and insulin resistance, may contribute to an elevated stroke risk either by amplification of the harmful effect of other “classical” non-specific risk factors, such as hypertension, or by acting independently.”

More than a few variables are known to impact stroke risk, but the fact that many of the risk factors are related to each other (‘fat people often also have high blood pressure’) makes it hard to figure out which variables are most important, how they interact with each other, and so on. One might in that context perhaps conceptualize the metabolic syndrome (MS) as a sort of indicator variable indicating whether a relatively common set of such related potential risk factors is present or not – it is worth noting in that context that the authors include in the text the observation that: “it is yet uncertain if the whole concept of the MS entails more than its individual components. The clustering of risk factors complicates the assessment of the contribution of individual components to the risk of vascular events, as well as assessment of synergistic or interacting effects.” MS confers a two- to threefold increased stroke risk, depending on the definition and the population analyzed, so there’s definitely some relevant stuff included in that box, but in the context of developing new treatment options and assessing risk better it might be helpful to – to put it simplistically – know whether variable X is significantly more important than variable Y (and how the variables interact, and so on). But this sort of information is hard to get.
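
The collinearity problem alluded to above is easy to illustrate with a toy simulation; the one below is mine, not the book’s, and the variable names and effect sizes are completely made up:

```python
# Toy simulation: when two risk factors are highly correlated, estimates of
# their individual contributions are unstable from sample to sample, even
# though their combined effect is estimated fairly precisely.
import numpy as np

rng = np.random.default_rng(0)
n = 200
for _ in range(3):                                   # a few replications of the 'study'
    bmi = rng.normal(size=n)
    bp = 0.95 * bmi + 0.3 * rng.normal(size=n)       # blood pressure tracks BMI closely
    risk = 0.5 * bmi + 0.5 * bp + rng.normal(size=n) # true effects are 0.5 and 0.5
    X = np.column_stack([np.ones(n), bmi, bp])
    coefs, *_ = np.linalg.lstsq(X, risk, rcond=None)
    print("bmi: %5.2f  bp: %5.2f  sum: %5.2f" % (coefs[1], coefs[2], coefs[1] + coefs[2]))
# The individual coefficients bounce around from sample to sample, whereas
# their sum stays close to 1 - the 'box' is easier to estimate than its contents.
```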

There’s more than one type of stroke, and the way diabetes modifies the risk of various stroke types is not completely clear:

“Most studies have consistently shown that DM is an important risk factor for ischemic stroke, while the incidence of hemorrhagic stroke in subjects with DM does not seem to be increased. Consequently, the ratio of ischemic to hemorrhagic stroke is higher in patients with DM than in those stroke patients without DM [recall the base rates I’ve mentioned before in the coverage of this book: 80% of strokes are ischemic strokes in Western countries, and 15 % hemorrhagic] […] The data regarding an association between DM and the risk of hemorrhagic stroke are quite conflicting. In the most series no increased risk of cerebral hemorrhage was found (10, 101), and in the Copenhagen Stroke Registry, hemorrhagic stroke was even six times less frequent in diabetic patients than in non-diabetic subjects (102). […] However, in another prospective population-based study DM was associated with an increased risk of primary intracerebral hemorrhage (103). […] The significance of DM as a risk factor of hemorrhagic stroke could differ depending on ethnicity of subjects or type of DM. In the large Nurses’ Health Study type 1 DM increased the risk of hemorrhagic stroke by 3.8 times while type 2 DM did not increase such a risk (96). […] It is yet unclear if DM predominantly predisposes to either large or small vessel ischemic stroke. Nevertheless, lacunar stroke (small, less than 15mm in diameter infarction, cyst-like, frequently multiple) is considered to be the typical type of stroke in diabetic subjects (105–107), and DM may be present in up to 28–43% of patients with cerebral lacunar infarction (108–110).”

The Danish results mentioned above might not be as useful to me as they first appeared if the type matters, because the majority of the diabetics included were type 2 diabetics. I know from personal experience that it is difficult to type-identify diabetics in the available Danish registry data if you want to work with population-level data, and any scheme attempting this will be subject to potentially large misidentification problems. Some subgroups can presumably be correctly identified using diagnostic codes, but a very large number of individuals will be left out of the analyses if you rely only on identification strategies where you’re (at least reasonably?) certain about the type. I worked on these identification problems during my graduate work, so perhaps a few more things are worth mentioning here. In the context of diabetic subgroup analyses, misidentification is in general a much larger problem for type 1 results than for type 2 results: unless the study design takes the large prevalence difference between the two conditions into account, the type 1 sample will be much smaller than the type 2 sample in pretty much all analytical contexts, so a small number of misidentified type 2 individuals can have a large impact on the type 1 results. Type 1s misidentified as type 2s are in general to be expected to be a much smaller problem for the validity of the type 2 analysis; misidentification of that kind will cause a loss of power in the type 1 subgroup analysis, which is low to start with (and it will also make the type 1 subgroup analysis even more vulnerable to misidentified type 2s), but it won’t change the results of the type 2 subgroup analysis in any significant way. Relatedly, even if enough type 2 patients are misidentified to cause problems for the interpretation of the type 1 subgroup analysis, this would not on its own be a good reason to doubt the results of the type 2 subgroup analysis. Another thing worth noting is that misidentification tends to lead to ‘mixing’, i.e. it makes the subgroup results look more similar; so when outcomes are nevertheless not similar in type 1 and type 2 individuals, this might be taken as an indicator that something potentially interesting is going on, because most analyses will struggle with some level of misidentification, which tends to reduce the power of tests of group differences.
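
A small simulation may make the argument above easier to follow. The prevalence ratio, the outcome risks and the misclassification rate below are all numbers I’ve picked purely for illustration – they’re not estimates from the literature:

```python
# Toy illustration: a modest number of misclassified type 2 patients distorts
# the (small) type 1 subgroup far more than type 1 patients misclassified as
# type 2 distort the (large) type 2 subgroup. All numbers are invented.
import numpy as np

rng = np.random.default_rng(1)
n_t1, n_t2 = 2_000, 38_000          # type 1 is a small minority of all diabetics
risk_t1, risk_t2 = 0.20, 0.05       # assumed 'true' outcome risks in each group
misclass = 0.03                     # 3% of each group gets the wrong label

t1_true = rng.random(n_t1) < risk_t1   # outcome indicators for true type 1s
t2_true = rng.random(n_t2) < risk_t2   # outcome indicators for true type 2s

t1_swapped = rng.random(n_t1) < misclass   # which type 1s get labelled type 2
t2_swapped = rng.random(n_t2) < misclass   # which type 2s get labelled type 1

labelled_t1 = np.concatenate([t1_true[~t1_swapped], t2_true[t2_swapped]])
labelled_t2 = np.concatenate([t2_true[~t2_swapped], t1_true[t1_swapped]])

print("true risks:     type 1 = %.3f, type 2 = %.3f" % (t1_true.mean(), t2_true.mean()))
print("observed risks: type 1 = %.3f, type 2 = %.3f" % (labelled_t1.mean(), labelled_t2.mean()))
# The 'type 1' estimate is pulled noticeably towards the type 2 value (mixing),
# while the 'type 2' estimate barely moves.
```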

What about stroke outcomes? A few observations were included above, but the book has a lot more on that topic – some further observations below:

“DM is an independent risk factor of death from stroke […]. Tuomilehto et al. (35) calculated that 16% of all stroke mortality in men and 33% in women could be directly attributed to DM. Patients with DM have higher hospital and long-term stroke mortality, more pronounced residual neurological deficits, and more severe disability after acute cerebrovascular accidents […]. The 1-year mortality rate, for example, was twofold higher in diabetic patients compared to non-diabetic subjects (50% vs. 25%) […]. Only 20% of people with DM survive over 5 years after the first stroke and half of these patients die within the first year (36, 128). […] The mechanisms underlying the worse outcome of stroke in diabetic subjects are not fully understood. […] Regarding prevention of stroke in patients with DM, it may be less relevant than in non-DM subjects to distinguish between primary and secondary prevention as all patients with DM are considered to be high-risk subjects regardless of the history of cerebrovascular accidents or the presence of clinical and subclinical vascular lesions. […] The influence of the mode of antihyperglycemic treatment on the risk of stroke is uncertain.

Control of blood pressure is very important in the diabetic setting:

“There are no doubts that there is a linear relation between elevated systolic blood pressure and the risk of stroke, both in people with or without DM. […] Although DM and arterial hypertension represent significant independent risk factors for stroke if they co-occur in the same patient the risk increases dramatically. A prospective study of almost 50 thousand subjects in Finland followed up for 19 years revealed that the hazard ratio for stroke incidence was 1.4, 2.0, 2.5, 3.5, and 4.5 and for stroke mortality was 1.5, 2.6, 3.1, 5.6, and 9.3, respectively, in subjects with an isolated modestly elevated blood pressure (systolic 140–159/diastolic 90–94 mmHg), isolated more severe hypertension (systolic >159 mmHg, diastolic >94 mmHg, or use of antihypertensive drugs), with isolated DM only, with both DM and modestly elevated blood pressure, and with both DM and more severe hypertension, relative to subjects without either of the risk factors (168). […] it remains unclear whether some classes of antihypertensive agents provide a stronger protection against stroke in diabetic patients than others. […] effective antihypertensive treatment is highly beneficial for reduction of stroke risk in diabetic patients, but the advantages of any particular class of antihypertensive medications are not substantially proven.”

Treatment of dyslipidemia is also very important, but here it does seem to matter how you treat it:

“It seems that the beneficial effect of statins is dose-dependent. The lower the LDL level that is achieved the stronger the cardiovascular protection. […] Recently, the results of the meta-analysis of 14 randomized trials of statins in 18,686 patients with DM had been published. It was calculated that statins use in diabetic patients can result in a 21% reduction of the risk of any stroke per 1 mmol/l reduction of LDL achieved […] There is no evidence from trials that supports efficacy of fibrates for stroke prevention in diabetic patients. […] No reduction of stroke risk by fibrates was shown also in a meta-analysis of eight trials enrolled 12,249 patients with type 2 DM (204).”

Antiplatelets?

“Significant reductions in stroke risk in diabetic patients receiving antiplatelet therapy were found in large-scale controlled trials (205). It appears that based on the high incidence of stroke and prevalence of stroke risk factors in the diabetic population the benefits of routine aspirin use for primary and secondary stroke prevention outweigh its potential risk of hemorrhagic stroke especially in patients older than 30 years having at least one additional risk factor (206). […] both guidelines issued by the AHA/ADA or the ESC/EASD on the prevention of cardiovascular disease in patients with DM support the use of aspirin in a dose of 50–325 mg daily for the primary prevention of stroke in subjects older than 40 years of age and additional risk factors, such as DM […] The newer antiplatelet agent, clopidogrel, was more efficacious in prevention of ischemic stroke than aspirin with greater risk reduction in the diabetic cohort especially in those treated with insulin compared to non-diabetics in CAPRIE trial (209). However, the combination of aspirin and clopidogrel does not appear to be more efficacious and safe compared to clopidogrel or aspirin alone”.

When you treat all risk factors aggressively, it turns out that the elevated stroke risk can be substantially reduced. Again the data on this stuff is from Denmark:

“Gaede et al. (216) have shown in the Steno 2 study that intensive multifactorial intervention aimed at correction of hyperglycemia, hypertension, dyslipidemia, and microalbuminuria along with aspirin use resulted in a reduction of cardiovascular morbidity including non-fatal stroke […] recently the results of the extended 13.3 years follow-up of this study were presented and the reduction of cardiovascular mortality by 57% and morbidity by 59% along with the reduction of the number of non-fatal stroke (6 vs. 30 events) in intensively treated group was convincingly demonstrated (217). Antihypertensive, hypolipidemic treatment, use of aspirin should thus be recommended as either primary or secondary prevention of stroke for patients with DM.”

March 3, 2017 Posted by | Books, Cardiology, Diabetes, Epidemiology, Medicine, Neurology, Pharmacology, Statistics

Biodemography of aging (I)

“The goal of this monograph is to show how questions about the connections between and among aging, health, and longevity can be addressed using the wealth of available accumulated knowledge in the field, the large volumes of genetic and non-genetic data collected in longitudinal studies, and advanced biodemographic models and analytic methods. […] This monograph visualizes aging-related changes in physiological variables and survival probabilities, describes methods, and summarizes the results of analyses of longitudinal data on aging, health, and longevity in humans performed by the group of researchers in the Biodemography of Aging Research Unit (BARU) at Duke University during the past decade. […] the focus of this monograph is studying dynamic relationships between aging, health, and longevity characteristics […] our focus on biodemography/biomedical demography meant that we needed to have an interdisciplinary and multidisciplinary biodemographic perspective spanning the fields of actuarial science, biology, economics, epidemiology, genetics, health services research, mathematics, probability, and statistics, among others.”

The quotes above are from the book‘s preface. In case this aspect was not clear from the comments above, this is the kind of book where you’ll randomly encounter sentences like these:

“The simplest model describing negative correlations between competing risks is the multivariate lognormal frailty model. We illustrate the properties of such model for the bivariate case.”

“The time-to-event sub-model specifies the latent class-specific expressions for the hazard rates conditional on the vector of biomarkers Yt and the vector of observed covariates X …”

…which means that some parts of the book are really hard to blog; it simply takes more effort to deal with this stuff here than it’s worth. As a result my coverage of the book will not provide a remotely ‘balanced view’ of the topics covered in it; I’ll skip a lot of the technical stuff because I don’t think it makes much sense to cover specific models and algorithms included in the book in detail here. However, I should probably also emphasize that although the book is in general not an easy read, it’s hard to read because ‘this stuff is complicated’, not because the authors are not trying. The authors in fact make it clear already in the preface that some chapters are easier to read than others and that some chapters are actually deliberately written as ‘guideposts and way-stations’, as they put it, in order to make it easier for the reader to find the stuff in which he or she is most interested (“the interested reader can focus directly on the chapters/sections of greatest interest without having to read the entire volume”) – they have definitely given readability aspects some thought, and I very much like the book so far; it’s full of great stuff and it’s very well written.

I have had occasion to question a few of the observations they’ve made; for example, I was a bit skeptical about a few of the conclusions they drew in chapter 6 (‘Medical Cost Trajectories and Onset of Age-Associated Diseases’), though this relates to what some would certainly consider to be minor details. In the chapter they describe a model of medical cost trajectories where the post-diagnosis follow-up period is 20 months. In my view that is much too short a follow-up period to draw conclusions about medical cost trajectories in the context of type 2 diabetes, one of the diseases included in the model – something I know because I’m intimately familiar with the literature on that topic; you need to look 7-10 years ahead to get a proper sense of how this variable develops over time. It really is highly relevant to include those later years, because a substantial proportion of the total cost of diabetes relates to complications which tend to take some years to develop, so if you leave those years out you may miss a large proportion of the total cost. If your cost analysis is based on a follow-up period as short as the one used in that model, you may also, on a related note, draw faulty conclusions about which medical procedures and subsidies are sensible/cost-effective in the setting of these patients, because highly adherent patients may be significantly more expensive in a short-run analysis like this one (they show up to their medical appointments and take their medications…) but much cheaper in the long run (…because they take their medications they don’t go blind or develop kidney failure). But as I say, it’s a minor point – this was one condition out of 20 included in the analysis they present, and if they’d addressed all the things that pedants like me might take issue with, the book would be twice as long and it would likely no longer be readable. Relatedly, the model they discuss in that chapter is far from unsalvageable; it’s just that one of the components of interest – ‘the difference between post- and pre-diagnosis cost levels associated with an acquired comorbidity’ – is, in the case of at least one disease, highly unlikely to be correct (given the authors’ interpretation of the variable), because there’s some stuff of relevance which the model does not include. I found the model quite interesting, despite the shortcomings, and the results were definitely surprising. (No, the above does not in my opinion count as an example of coverage of a ‘specific model […] in detail’. Or maybe it does, but I included no equations. On reflection I probably can’t promise much more than that – sometimes the details are interesting…)
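
To illustrate the follow-up-length point with some made-up numbers (and they really are made up – they are not estimates of actual diabetes-related costs, the point is only to show the mechanics):

```python
# Toy illustration: an adherent patient looks more expensive than a
# non-adherent one over a 20-month window, but cheaper over 10 years once
# the costs of late complications are included. All numbers are invented.
monthly_medication_and_visits = 60     # extra monthly cost of being adherent
complication_cost_per_month = 400      # later monthly cost of complications
complication_onset_month = 48          # complications assumed to start at ~4 years
                                       # for the non-adherent patient

def cumulative_cost(months, adherent):
    cost = months * monthly_medication_and_visits if adherent else 0
    if not adherent and months > complication_onset_month:
        cost += (months - complication_onset_month) * complication_cost_per_month
    return cost

for horizon in (20, 120):  # 20 months vs 10 years
    a, na = cumulative_cost(horizon, True), cumulative_cost(horizon, False)
    print(f"{horizon:>3} months: adherent = {a:>6}, non-adherent = {na:>6}")
```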

Anyway, below I’ve added some quotes from the first few chapters of the book and a few remarks along the way.

“The genetics of aging, longevity, and mortality has become the subject of intensive analyses […]. However, most estimates of genetic effects on longevity in GWAS have not reached genome-wide statistical significance (after applying the Bonferroni correction for multiple testing) and many findings remain non-replicated. Possible reasons for slow progress in this field include the lack of a biologically-based conceptual framework that would drive development of statistical models and methods for genetic analyses of data [here I was reminded of Burnham & Anderson’s coverage, in particular their criticism of mindless ‘Let the computer find out’-strategies – the authors of that chapter seem to share their skepticism…], the presence of hidden genetic heterogeneity, the collective influence of many genetic factors (each with small effects), the effects of rare alleles, and epigenetic effects, as well as molecular biological mechanisms regulating cellular functions. […] Decades of studies of candidate genes show that they are not linked to aging-related traits in a straightforward fashion (Finch and Tanzi 1997; Martin 2007). Recent genome-wide association studies (GWAS) have supported this finding by showing that the traits in late life are likely controlled by a relatively large number of common genetic variants […]. Further, GWAS often show that the detected associations are of tiny size (Stranger et al. 2011).”

I think this ties in well with what I’ve previously read on these and related topics – see e.g. the second-last paragraph quoted in my coverage of Richard Alexander’s book, or some of the remarks included in Roberts et al. Anyway, moving on:

“It is well known from epidemiology that values of variables describing physiological states at a given age are associated with human morbidity and mortality risks. Much less well known are the facts that not only the values of these variables at a given age, but also characteristics of their dynamic behavior during the life course are also associated with health and survival outcomes. This chapter [chapter 8 in the book, US] shows that, for monotonically changing variables, the value at age 40 (intercept), the rate of change (slope), and the variability of a physiological variable, at ages 40–60, significantly influence both health-span and longevity after age 60. For non-monotonically changing variables, the age at maximum, the maximum value, the rate of decline after reaching the maximum (right slope), and the variability in the variable over the life course may influence health-span and longevity. This indicates that such characteristics can be important targets for preventive measures aiming to postpone onsets of complex diseases and increase longevity.”
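
To give an idea of what those trajectory characteristics might look like in practice, here’s a small sketch of how one could extract a value at age 40 (intercept), a rate of change (slope) and a variability measure from one individual’s measurements. The data and the specific modelling choices (a simple least-squares line, residual scatter as the variability measure) are mine and not necessarily those used in the book:

```python
# Sketch: extracting 'value at 40', 'rate of change', and 'variability'
# from one individual's measurements of a monotonically changing variable
# (e.g. blood glucose) between ages 40 and 60. Data are invented.
import numpy as np

ages = np.array([40, 42, 45, 48, 51, 54, 57, 60], dtype=float)
values = np.array([4.9, 5.0, 5.3, 5.2, 5.6, 5.8, 5.7, 6.1])  # made-up readings

slope, intercept_at_40 = np.polyfit(ages - 40, values, 1)  # centre age at 40
fitted = intercept_at_40 + slope * (ages - 40)
variability = np.std(values - fitted, ddof=2)  # scatter around the linear trend

print(f"value at age 40: {intercept_at_40:.2f}")
print(f"rate of change per year: {slope:.3f}")
print(f"variability around the trend: {variability:.3f}")
# These three per-individual summaries can then be related to health-span
# and longevity after age 60, as described in the quote above.
```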

The chapter from which the quotes in the next two paragraphs are taken was completely filled with data from the Framingham Heart Study, and it was hard for me to know what to include here and what to leave out – so you should probably just consider the stuff I’ve included below as samples of the sort of observations included in that part of the coverage.

“To mediate the influence of internal or external factors on lifespan, physiological variables have to show associations with risks of disease and death at different age intervals, or directly with lifespan. For many physiological variables, such associations have been established in epidemiological studies. These include body mass index (BMI), diastolic blood pressure (DBP), systolic blood pressure (SBP), pulse pressure (PP), blood glucose (BG), serum cholesterol (SCH), hematocrit (H), and ventricular rate (VR). […] the connection between BMI and mortality risk is generally J-shaped […] Although all age patterns of physiological indices are non-monotonic functions of age, blood glucose (BG) and pulse pressure (PP) can be well approximated by monotonically increasing functions for both genders. […] the average values of body mass index (BMI) increase with age (up to age 55 for males and 65 for females), and then decline for both sexes. These values do not change much between ages 50 and 70 for males and between ages 60 and 70 for females. […] Except for blood glucose, all average age trajectories of physiological indices differ between males and females. Statistical analysis confirms the significance of these differences. In particular, after age 35 the female BMI increases faster than that of males. […] [When comparing women with less than or equal to 11 years of education [‘LE’] to women with 12 or more years of education [HE]:] The average values of BG for both groups are about the same until age 45. Then the BG curve for the LE females becomes higher than that of the HE females until age 85 where the curves intersect. […] The average values of BMI in the LE group are substantially higher than those among the HE group over the entire age interval. […] The average values of BG for the HE and LE males are very similar […] However, the differences between groups are much smaller than for females.”

They also in the chapter compared individuals with short life-spans [‘SL’, died before the age of 75] and those with long life-spans [‘LL’, 100 longest-living individuals in the relevant sample] to see if the variables/trajectories looked different. They did, for example: “trajectories for the LL females are substantially different from those for the SL females in all eight indices. Specifically, the average values of BG are higher and increase faster in the SL females. The entire age trajectory of BMI for the LL females is shifted to the right […] The average values of DBP [diastolic blood pressure, US] among the SL females are higher […] A particularly notable observation is the shift of the entire age trajectory of BMI for the LL males and females to the right (towards an older age), as compared with the SL group, and achieving its maximum at a later age. Such a pattern is markedly different from that for healthy and unhealthy individuals. The latter is mostly characterized by the higher values of BMI for the unhealthy people, while it has similar ages at maximum for both the healthy and unhealthy groups. […] Physiological aging changes usually develop in the presence of other factors affecting physiological dynamics and morbidity/mortality risks. Among these other factors are year of birth, gender, education, income, occupation, smoking, and alcohol use. An important limitation of most longitudinal studies is the lack of information regarding external disturbances affecting individuals in their day-today life.”

I incidentally noted while reading that chapter that a relevant variable ‘lurking in the shadows’ in the context of the male and female BMI trajectories might be changing smoking habits over time. I have not looked at US data on this topic, but I do know that the smoking patterns of Danish males and females during the latter half of the last century were markedly different and changed quite dramatically in just a few decades: a lot more males than females smoked in the 60s, whereas the proportions of male and female smokers today are much more similar, because a lot of males have given up smoking (I refer Danish readers to this blog post which I wrote some years ago on these topics). The authors of the chapter do incidentally look a little at data on smokers, and they observe that smokers’ BMI is lower than non-smokers’ (not surprising), and that the smokers’ BMI curve (displaying the relationship between BMI and age) grows at a slower rate than that of non-smokers (whether this was to be expected is perhaps less clear, at least to me – the authors don’t interpret these specific numbers, they just report them).

The next chapter is one of the chapters in the book dealing with the SEER data I also mentioned not long ago in the context of my coverage of Bueno et al. Some sample quotes from that chapter below:

“To better address the challenge of “healthy aging” and to reduce economic burdens of aging-related diseases, key factors driving the onset and progression of diseases in older adults must be identified and evaluated. An identification of disease-specific age patterns with sufficient precision requires large databases that include various age-specific population groups. Collections of such datasets are costly and require long periods of time. That is why few studies have investigated disease-specific age patterns among older U.S. adults and there is limited knowledge of factors impacting these patterns. […] Information collected in U.S. Medicare Files of Service Use (MFSU) for the entire Medicare-eligible population of older U.S. adults can serve as an example of observational administrative data that can be used for analysis of disease-specific age patterns. […] In this chapter, we focus on a series of epidemiologic and biodemographic characteristics that can be studied using MFSU.”

“Two datasets capable of generating national level estimates for older U.S. adults are the Surveillance, Epidemiology, and End Results (SEER) Registry data linked to MFSU (SEER-M) and the National Long Term Care Survey (NLTCS), also linked to MFSU (NLTCS-M). […] The SEER-M data are the primary dataset analyzed in this chapter. The expanded SEER registry covers approximately 26 % of the U.S. population. In total, the Medicare records for 2,154,598 individuals are available in SEER-M […] For the majority of persons, we have continuous records of Medicare services use from 1991 (or from the time the person reached age 65 after 1990) to his/her death. […] The NLTCS-M data contain two of the six waves of the NLTCS: namely, the cohorts of years 1994 and 1999. […] In total, 34,077 individuals were followed-up between 1994 and 1999. These individuals were given the detailed NLTCS interview […] which has information on risk factors. More than 200 variables were selected”

In short, these data sets are very large, and contain a lot of information. Here are some results/data:

“Among studied diseases, incidence rates of Alzheimer’s disease, stroke, and heart failure increased with age, while the rates of lung and breast cancers, angina pectoris, diabetes, asthma, emphysema, arthritis, and goiter became lower at advanced ages. [..] Several types of age-patterns of disease incidence could be described. The first was a monotonic increase until age 85–95, with a subsequent slowing down, leveling off, and decline at age 100. This pattern was observed for myocardial infarction, stroke, heart failure, ulcer, and Alzheimer’s disease. The second type had an earlier-age maximum and a more symmetric shape (i.e., an inverted U-shape) which was observed for lung and colon cancers, Parkinson’s disease, and renal failure. The majority of diseases (e.g., prostate cancer, asthma, and diabetes mellitus among them) demonstrated a third shape: a monotonic decline with age or a decline after a short period of increased rates. […] The occurrence of age-patterns with a maximum and, especially, with a monotonic decline contradicts the hypothesis that the risk of geriatric diseases correlates with an accumulation of adverse health events […]. Two processes could be operative in the generation of such shapes. First, they could be attributed to the effect of selection […] when frail individuals do not survive to advanced ages. This approach is popular in cancer modeling […] The second explanation could be related to the possibility of under-diagnosis of certain chronic diseases at advanced ages (due to both less pronounced disease symptoms and infrequent doctor’s office visits); however, that possibility cannot be assessed with the available data […this is because the data sets are based on Medicare claims – US]”

“The most detailed U.S. data on cancer incidence come from the SEER Registry […] about 60 % of malignancies are diagnosed in persons aged 65+ years old […] In the U.S., the estimated percent of cancer patients alive after being diagnosed with cancer (in 2008, by current age) was 13 % for those aged 65–69, 25 % for ages 70–79, and 22 % for ages 80+ years old (compared with 40 % of those aged younger than 65 years old) […] Diabetes affects about 21 % of the U.S. population aged 65+ years old (McDonald et al. 2009). However, while more is known about the prevalence of diabetes, the incidence of this disease among older adults is less studied. […] [In multiple previous studies] the incidence rates of diabetes decreased with age for both males and females. In the present study, we find similar patterns […] The prevalence of asthma among the U.S. population aged 65+ years old in the mid-2000s was as high as 7 % […] older patients are more likely to be underdiagnosed, untreated, and hospitalized due to asthma than individuals younger than age 65 […] asthma incidence rates have been shown to decrease with age […] This trend of declining asthma incidence with age is in agreement with our results.”

“The prevalence and incidence of Alzheimer’s disease increase exponentially with age, with the most notable rise occurring through the seventh and eight decades of life (Reitz et al. 2011). […] whereas dementia incidence continues to increase beyond age 85, the rate of increase slows down [which] suggests that dementia diagnosed at advanced ages might be related not to the aging process per se, but associated with age-related risk factors […] Approximately 1–2 % of the population aged 65+ and up to 3–5 % aged 85+ years old suffer from Parkinson’s disease […] There are few studies of Parkinsons disease incidence, especially in the oldest old, and its age patterns at advanced ages remain controversial”.

“One disadvantage of large administrative databases is that certain factors can produce systematic over/underestimation of the number of diagnosed diseases or of identification of the age at disease onset. One reason for such uncertainties is an incorrect date of disease onset. Other sources are latent disenrollment and the effects of study design. […] the date of onset of a certain chronic disease is a quantity which is not defined as precisely as mortality. This uncertainty makes difficult the construction of a unified definition of the date of onset appropriate for population studies.”

“[W]e investigated the phenomenon of multimorbidity in the U.S. elderly population by analyzing mutual dependence in disease risks, i.e., we calculated disease risks for individuals with specific pre-existing conditions […]. In total, 420 pairs of diseases were analyzed. […] For each pair, we calculated age patterns of unconditional incidence rates of the diseases, conditional rates of the second (later manifested) disease for individuals after onset of the first (earlier manifested) disease, and the hazard ratio of development of the subsequent disease in the presence (or not) of the first disease. […] three groups of interrelations were identified: (i) diseases whose risk became much higher when patients had a certain pre-existing (earlier diagnosed) disease; (ii) diseases whose risk became lower than in the general population when patients had certain pre-existing conditions […] and (iii) diseases for which “two-tail” effects were observed: i.e., when the effects are significant for both orders of disease precedence; both effects can be direct (either one of the diseases from a disease pair increases the risk of the other disease), inverse (either one of the diseases from a disease pair decreases the risk of the other disease), or controversial (one disease increases the risk of the other, but the other disease decreases the risk of the first disease from the disease pair). In general, the majority of disease pairs with increased risk of the later diagnosed disease in both orders of precedence were those in which both the pre-existing and later occurring diseases were cancers, and also when both diseases were of the same organ. […] Generally, the effect of dependence between risks of two diseases diminishes with advancing age. […] Identifying mutual relationships in age-associated disease risks is extremely important since they indicate that development of […] diseases may involve common biological mechanisms.”
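
To make it a little clearer what is being computed for each disease pair, here’s a rough sketch of the kind of calculation involved. The data layout and the crude person-year approach are simplifications I’ve chosen for illustration; the authors’ actual methods are considerably more sophisticated:

```python
# Rough sketch of a disease-pair analysis: compare the incidence rate of
# disease B among people with a prior diagnosis of disease A to the rate
# among people without A. Follow-up records and onset ages are invented.
from collections import namedtuple

Person = namedtuple("Person", "years_at_risk onset_A onset_B")  # ages, or None

people = [  # made-up follow-up records
    Person(20, 70, 74), Person(15, None, 72), Person(25, 68, None),
    Person(10, None, None), Person(18, 66, 71), Person(22, None, 80),
]

def rate_of_B(records, with_A):
    """Crude incidence rate of B per person-year, split by prior-A status."""
    events = person_years = 0
    for p in records:
        has_A = p.onset_A is not None
        if has_A != with_A:
            continue
        if with_A:
            # only count B if it manifested after A (A is the pre-existing disease)
            events += p.onset_B is not None and p.onset_B > p.onset_A
        else:
            events += p.onset_B is not None
        person_years += p.years_at_risk
    return events / person_years

rate_given_A = rate_of_B(people, with_A=True)
rate_without_A = rate_of_B(people, with_A=False)
print(f"rate of B after A: {rate_given_A:.3f}, without A: {rate_without_A:.3f}")
print(f"crude rate ratio: {rate_given_A / rate_without_A:.2f}")
```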

“in population cohorts, trends in prevalence result from combinations of trends in incidence, population at risk, recovery, and patients’ survival rates. Trends in the rates for one disease also may depend on trends in concurrent diseases, e.g., increasing survival from CHD contributes to an increase in the cancer incidence rate if the individuals who survived were initially susceptible to both diseases.”

March 1, 2017 Posted by | Biology, Books, Cancer/oncology, Cardiology, Demographics, Diabetes, Epidemiology, Genetics, Medicine, Nephrology, Neurology

The Ageing Immune System and Health (II)

Here’s the first post about the book. I finished it a while ago, but I recently realized that I never completed my intended coverage of it here on the blog. As some of the book’s material sort-of-kind-of relates to material encountered in a book I’m currently reading (Biodemography of Aging), I decided I might as well finish my coverage now – both to review some things I might have forgotten in the meantime and to cover here some of the material from the second half of the book. It’s a nice book with some interesting observations, but as I also pointed out in my first post it is definitely not an easy read. Below I have included some observations from the book’s second half.

Lungs:

“The aged lung is characterised by airspace enlargement similar to, but not identical with acquired emphysema [4]. Such tissue damage is detected even in non-smokers above 50 years of age as the septa of the lung alveoli are destroyed and the enlarged alveolar structures result in a decreased surface for gas exchange […] Additional problems are that surfactant production decreases with age [6] increasing the effort needed to expand the lungs during inhalation in the already reduced thoracic cavity volume where the weakened muscles are unable to thoroughly ventilate. […] As ageing is associated with respiratory muscle strength reduction, coughing becomes difficult making it progressively challenging to eliminate inhaled particles, pollens, microbes, etc. Additionally, ciliary beat frequency (CBF) slows down with age impairing the lungs’ first line of defence: mucociliary clearance [9] as the cilia can no longer repel invading microorganisms and particles. Consequently e.g. bacteria can more easily colonise the airways leading to infections that are frequent in the pulmonary tract of the older adult.”

“With age there are dramatic changes in neutrophil function, including reduced chemotaxis, phagocytosis and bactericidal mechanisms […] reduced bactericidal function will predispose to infection but the reduced chemotaxis also has consequences for lung tissue as this results in increased tissue bystander damage from neutrophil elastases released during migration […] It is currently accepted that alterations in pulmonary PPAR profile, more precisely loss of PPARγ activity, can lead to inflammation, allergy, asthma, COPD, emphysema, fibrosis, and cancer […]. Since it has been reported that PPARγ activity decreases with age, this provides a possible explanation for the increasing incidence of these lung diseases and conditions in older individuals [6].”

Cancer:

“Age is an important risk factor for cancer and subjects aged over 60 also have a higher risk of comorbidities. Approximately 50 % of neoplasms occur in patients older than 70 years […] a major concern for poor prognosis is with cancer patients over 70–75 years. These patients have a lower functional reserve, a higher risk of toxicity after chemotherapy, and an increased risk of infection and renal complications that lead to a poor quality of life. […] [Whereas] there is a difference in organs with higher cancer incidence in developed versus developing countries [,] incidence increases with ageing almost irrespective of country […] The findings from Surveillance, Epidemiology and End Results Program [SEERincidentally I likely shall at some point discuss this one in much more detail, as the aforementioned biodemography textbook covers this data in a lot of detail.. – US] [6] show that almost a third of all cancer are diagnosed after the age of 75 years and 70 % of cancer-related deaths occur after the age of 65 years. […] The traditional clinical trial focus is on younger and healthier patient, i.e. with few or no co-morbidities. These restrictions have resulted in a lack of data about the optimal treatment for older patients [7] and a poor evidence base for therapeutic decisions. […] In the older patient, neutropenia, anemia, mucositis, cardiomyopathy and neuropathy — the toxic effects of chemotherapy — are more pronounced […] The correction of comorbidities and malnutrition can lead to greater safety in the prescription of chemotherapy […] Immunosenescence is a general classification for changes occurring in the immune system during the ageing process, as the distribution and function of cells involved in innate and adaptive immunity are impaired or remodelled […] Immunosenescence is considered a major contributor to cancer development in aged individuals“.

Neurodegenerative diseases:

“Dementia and age-related vision loss are major causes of disability in our ageing population and it is estimated that a third of people aged over 75 are affected. […] age is the largest risk factor for the development of neurodegenerative diseases […] older patients with comorbidities such as atherosclerosis, type II diabetes or those suffering from repeated or chronic systemic bacterial and viral infections show earlier onset and progression of clinical symptoms […] analysis of post-mortem brain tissue from healthy older individuals has provided evidence that the presence of misfolded proteins alone does not correlate with cognitive decline and dementia, implying that additional factors are critical for neural dysfunction. We now know that innate immune genes and life-style contribute to the onset and progression of age-related neuronal dysfunction, suggesting that chronic activation of the immune system plays a key role in the underlying mechanisms that lead to irreversible tissue damage in the CNS. […] Collectively these studies provide evidence for a critical role of inflammation in the pathogenesis of a range of neurodegenerative diseases, but the factors that drive or initiate inflammation remain largely elusive.”

“The effect of infection, mimicked experimentally by administration of bacterial lipopolysaccharide (LPS) has revealed that immune to brain communication is a critical component of a host organism’s response to infection and a collection of behavioural and metabolic adaptations are initiated over the course of the infection with the purpose of restricting the spread of a pathogen, optimising conditions for a successful immune response and preventing the spread of infection to other organisms [10]. These behaviours are mediated by an innate immune response and have been termed ‘sickness behaviours’ and include depression, reduced appetite, anhedonia, social withdrawal, reduced locomotor activity, hyperalgesia, reduced motivation, cognitive impairment and reduced memory encoding and recall […]. Metabolic adaptation to infection include fever, altered dietary intake and reduction in the bioavailability of nutrients that may facilitate the growth of a pathogen such as iron and zinc [10]. These behavioural and metabolic adaptions are evolutionary highly conserved and also occur in humans”.

“Sickness behaviour and transient microglial activation are beneficial for individuals with a normal, healthy CNS, but in the ageing or diseased brain the response to peripheral infection can be detrimental and increases the rate of cognitive decline. Aged rodents exhibit exaggerated sickness and prolonged neuroinflammation in response to systemic infection […] Older people who contract a bacterial or viral infection or experience trauma postoperatively, also show exaggerated neuroinflammatory responses and are prone to develop delirium, a condition which results in a severe short term cognitive decline and a long term decline in brain function […] Collectively these studies demonstrate that peripheral inflammation can increase the accumulation of two neuropathological hallmarks of AD, further strengthening the hypothesis that inflammation i[s] involved in the underlying pathology. […] Studies from our own laboratory have shown that AD patients with mild cognitive impairment show a fivefold increased rate of cognitive decline when contracting a systemic urinary tract or respiratory tract infection […] Apart from bacterial infection, chronic viral infections have also been linked to increased incidence of neurodegeneration, including cytomegalovirus (CMV). This virus is ubiquitously distributed in the human population, and along with other age-related diseases such as cardiovascular disease and cancer, has been associated with increased risk of developing vascular dementia and AD [66, 67].”

Frailty:

“Frailty is associated with changes to the immune system, importantly the presence of a pro-inflammatory environment and changes to both the innate and adaptive immune system. Some of these changes have been demonstrated to be present before the clinical features of frailty are apparent suggesting the presence of potentially modifiable mechanistic pathways. To date, exercise programme interventions have shown promise in the reversal of frailty and related physical characteristics, but there is no current evidence for successful pharmacological intervention in frailty. […] In practice, acute illness in a frail person results in a disproportionate change in a frail person’s functional ability when faced with a relatively minor physiological stressor, associated with a prolonged recovery time […] Specialist hospital services such as surgery [15], hip fractures [16] and oncology [17] have now begun to recognise frailty as an important predictor of mortality and morbidity.

I should probably mention here that this is another area where there’s an overlap between this book and the biodemography text I’m currently reading; chapter 7 of the latter text is about ‘Indices of Cumulative Deficits’ and covers this kind of stuff in a lot more detail than does this one, including e.g. detailed coverage of relevant statistical properties of one such index. Anyway, back to the coverage:

“Population based studies have demonstrated that the incidence of infection and subsequent mortality is higher in populations of frail people. […] The prevalence of pneumonia in a nursing home population is 30 times higher than the general population [39, 40]. […] The limited data available demonstrates that frailty is associated with a state of chronic inflammation. There is also evidence that inflammageing predates a diagnosis of frailty suggesting a causative role. […] A small number of studies have demonstrated a dysregulation of the innate immune system in frailty. Frail adults have raised white cell and neutrophil count. […] High white cell count can predict frailty at a ten year follow up [70]. […] A recent meta-analysis and four individual systematic reviews have found beneficial evidence of exercise programmes on selected physical and functional ability […] exercise interventions may have no positive effect in operationally defined frail individuals. […] To date there is no clear evidence that pharmacological interventions improve or ameliorate frailty.”

Exercise:

“[A]s we get older the time and intensity at which we exercise is severely reduced. Physical inactivity now accounts for a considerable proportion of age-related disease and mortality. […] Regular exercise has been shown to improve neutrophil microbicidal functions which reduce the risk of infectious disease. Exercise participation is also associated with increased immune cell telomere length, and may be related to improved vaccine responses. The anti-inflammatory effect of regular exercise and negative energy balance is evident by reduced inflammatory immune cell signatures and lower inflammatory cytokine concentrations. […] Reduced physical activity is associated with a positive energy balance leading to increased adiposity and subsequently systemic inflammation [5]. […] Elevated neutrophil counts accompany increased inflammation with age and the increased ratio of neutrophils to lymphocytes is associated with many age-related diseases including cancer [7]. Compared to more active individuals, less active and overweight individuals have higher circulating neutrophil counts [8]. […] little is known about the intensity, duration and type of exercise which can provide benefits to neutrophil function. […] it remains unclear whether exercise and physical activity can override the effects of NK cell dysfunction in the old. […] A considerable number of studies have assessed the effects of acute and chronic exercise on measures of T-cell immunesenescence including T cell subsets, phenotype, proliferation, cytokine production, chemotaxis, and co-stimulatory capacity. […] Taken together exercise appears to promote an anti-inflammatory response which is mediated by altered adipocyte function and improved energy metabolism leading to suppression of pro-inflammatory cytokine production in immune cells.”

February 24, 2017 Posted by | Biology, Books, Cancer/oncology, Epidemiology, Immunology, Medicine, Neurology

Economic Analysis in Healthcare (I)

“This book is written to provide […] a useful balance of theoretical treatment, description of empirical analyses and breadth of content for use in undergraduate modules in health economics for economics students, and for students taking a health economics module as part of their postgraduate training. Although we are writing from a UK perspective, we have attempted to make the book as relevant internationally as possible by drawing on examples, case studies and boxed highlights, not just from the UK, but from a wide range of countries”

I’m currently reading this book. The coverage has been somewhat disappointing because it’s mostly an undergraduate text which has so far mainly been covering concepts and ideas I’m already familiar with, but it’s not terrible – just okay-ish. I have added some observations from the first half of the book below.

“Health economics is the application of economic theory, models and empirical techniques to the analysis of decision making by people, health care providers and governments with respect to health and health care. […] Health economics has evolved into a highly specialised field, drawing on related disciplines including epidemiology, statistics, psychology, sociology, operations research and mathematics […] health economics is not shorthand for health care economics. […] Health economics studies not only the provision of health care, but also how this impacts on patients’ health. Other means by which health can be improved are also of interest, as are the determinants of ill-health. Health economics studies not only how health care affects population health, but also the effects of education, housing, unemployment and lifestyles.”

“Economic analyses have been used to explain the rise in obesity. […] The studies show that reasons for the rise in obesity include: *Technological innovation in food production and transportation that has reduced the cost of food preparation […] *Agricultural innovation and falling food prices that has led to an expansion in food supply […] *A decline in physical activity, both at home and at work […] *An increase in the number of fast-food outlets, resulting in changes to the relative prices of meals […]. *A reduction in the prevalence of smoking, which leads to increases in weight (Chou et al., 2004).”

“[T]he evidence is that ageing is in reality a relatively small factor in rising health care costs. The popular view is known as the ‘expansion of morbidity’ hypothesis. Gruenberg (1977) suggested that the decline in mortality that has led to an increase in the number of older people is because fewer people die from illnesses that they have, rather than because disease incidence and prevalence are lower. Lower mortality is therefore accompanied by greater morbidity and disability. However, Fries (1980) suggested an alternative hypothesis, ‘compression of morbidity’. Lower mortality rates are due to better health amongst the population, so people not only live longer, they are in better health when old. […] Zweifel et al. (1999) examined the hypothesis that the main determinant of high health care costs amongst older people is not the time since they were born, but the time until they die. Their results, confirmed by many subsequent studies, is that proximity to death does indeed explain higher health care costs better than age per se. Seshamani and Gray (2004) estimated that in the UK this is a factor up to 15 years before death, and annual costs increase tenfold during the last 5 years of life. The consensus is that ageing per se contributes little to the continuing rise in health expenditures that all countries face. Much more important drivers are improved quality of care, access to care, and more expensive new technology.”

“The difference between AC [average cost] and MC [marginal cost] is very important in applied health economics. Very often data are available on the average cost of health care services but not on their marginal cost. However, using average costs as if they were marginal costs may mislead. For example, hospital costs will be reduced by schemes that allow some patients to be treated in the community rather than being admitted. Given data on total costs of inpatient stays, it is possible to calculate an average cost per patient. It is tempting to conclude that avoiding an admission will reduce costs by that amount. However, the average includes patients with different levels of illness severity, and the more severe the illness the more costly they will be to treat. Less severely ill patients are most likely to be suitable for treatment in the community, so MC will be lower than AC. Such schemes will therefore produce a lower cost reduction than the estimate of AC suggests.
A problem with multi-product cost functions is that it is not possible to define meaningfully what the AC of a particular product is. If different products share some inputs, the costs of those inputs cannot be solely attributed to any one of them. […] In practice, when multi-product organisations such as hospitals calculate costs for particular products, they use accounting rules to share out the costs of all inputs and calculate average not marginal costs.”
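
To make the AC/MC distinction concrete, here’s a small illustrative calculation of my own (the patient numbers and costs below are made up, not taken from the book): if it’s the mild cases that get diverted to community treatment, the cost saved per avoided admission is close to the mild patients’ treatment cost, not the average cost per admission.

```python
# Illustrative only: made-up costs for a hospital treating patients of two
# severity levels, showing why average cost (AC) overstates the saving from
# diverting the least severe patients to community care.

patients = {"severe": {"n": 100, "cost_per_patient": 8000},
            "mild":   {"n": 100, "cost_per_patient": 2000}}

total_cost = sum(g["n"] * g["cost_per_patient"] for g in patients.values())
total_n = sum(g["n"] for g in patients.values())
average_cost = total_cost / total_n  # AC across all admissions

# The admissions actually avoided are the mild ones, so the relevant
# (approximate) marginal cost is the cost of treating a mild patient.
marginal_cost_avoided = patients["mild"]["cost_per_patient"]

print(f"Average cost per admission: {average_cost:.0f}")                        # 5000
print(f"Approx. saving per avoided (mild) admission: {marginal_cost_avoided}")  # 2000
```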

“Studies of economies of scale in the health sector do not give a consistent and generalisable picture. […] studies of scope economies [also] do not show any consistent and generalisable picture. […] The impact of hospital ownership type on a range of key outcomes is generally ambiguous, with different studies yielding conflicting results. […] The association between hospital ownership and patient outcomes is unclear. The evidence is mixed and inconclusive regarding the impact of hospital ownership on access to care, morbidity, mortality, and adverse events.”

“Public goods are goods that are consumed jointly by all consumers. The strict economics definition of a public good is that they have two characteristics. The first is non-rivalry. This means that the consumption of a good or service by one person does not prevent anyone else from consuming it. Non-rival goods therefore have large marginal external benefits, which make them socially very desirable but privately unprofitable to provide. Examples of nonrival goods are street lighting and pavements. The second is non-excludability. This means that it is not possible to provide a good or service to one person without letting others also consume it. […] This may lead to a free-rider problem, in which people are unwilling to pay for goods and services that are of value to them. […] Note the distinction between public goods, which are goods and services that are non-rival and non-excludable, and publicly provided goods, which are goods or services that are provided by the government for any reason. […] Most health care products and services are not public goods because they are both rival and excludable. […] However, some health care, particularly public health programmes, does have public good properties.”

“[H]ealth care is typically consumed under conditions of uncertainty with respect to the timing of health care expenditure […] and the amount of expenditure on health care that is required […] The usual solution to such problems is insurance. […] Adverse selection exists when exactly the wrong people, from the point of view of the insurance provider, choose to buy insurance: those with high risks. […] Those who are most likely to buy health insurance are those who have a relatively high probability of becoming ill and maybe also incur greater costs than the average when they are ill. […] Adverse selection arises because of the asymmetry of information between insured and insurer. […] Two approaches are adopted to prevent adverse selection. The first is experience rating, where the insurance provider sets a different insurance premium for different risk groups. Those who apply for health insurance might be asked to undergo a medical examination and to disclose any relevant facts concerning their risk status. […] There are two problems with this approach. First, the cost of acquiring the appropriate information may be high. […] Secondly, it might encourage insurance providers to ‘cherry pick’ people, only choosing to provide insurance to the low risk. This may mean that high-risk people are unable to obtain health insurance at all. […] The second approach is to make health insurance compulsory. […] The problem with this is that low-risk people effectively subsidise the health insurance payments of those with higher risks, which may be regarded […] as inequitable.”
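
A stylized illustration of the premium arithmetic involved (all group sizes, probabilities, and costs below are numbers I’ve made up): experience rating charges each risk group its own expected loss, whereas a single pooled premium leaves low-risk buyers paying more than their expected loss – which is what sets adverse selection in motion in a voluntary market.

```python
# Illustrative only: two risk groups insuring against a 10,000 loss.
loss = 10_000
groups = {"low_risk":  {"n": 900, "p_ill": 0.05},
          "high_risk": {"n": 100, "p_ill": 0.40}}

# Experience rating: each group pays (roughly) its own expected loss.
for g in groups.values():
    g["fair_premium"] = g["p_ill"] * loss

# Community rating: everyone pays the pooled expected loss.
expected_total = sum(g["n"] * g["p_ill"] * loss for g in groups.values())
pooled_premium = expected_total / sum(g["n"] for g in groups.values())

print({name: g["fair_premium"] for name, g in groups.items()})  # low risk 500, high risk 4000
print(f"Pooled premium: {pooled_premium:.0f}")                  # 850

# At a pooled premium of 850, low-risk buyers (expected loss 500) may opt out;
# if they do, the premium has to rise towards 4000 -- the adverse selection spiral.
```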

“Health insurance changes the economic incentives facing both the consumers and the providers of health care. One manifestation of these changes is the existence of moral hazard. This is a phenomenon common to all forms of insurance. The suggestion is that when people are insured against risks and their consequences, they are less careful about minimising them. […] Moral hazard arises when it is possible to alter the probability of the insured event, […] or the size of the insured loss […] The extent of the problem depends on the price elasticity of demand […] Three main mechanisms can be used to reduce moral hazard. The first is co-insurance. Many insurance policies require that when an event occurs the insured shares the insured loss […] with the insurer. The co-insurance rate is the percentage of the insured loss that is paid by the insured. The co-payment is the amount that they pay. […] The second is deductibles. A deductible is an amount of money the insured pays when a claim is made irrespective of co-insurance. The insurer will not pay the insured loss unless the deductible is paid by the insured. […] The third is no-claims bonuses. These are payments made by insurers to discourage claims. They usually take the form of reduced insurance premiums in the next period. […] No-claims bonuses typically discourage insurance claims where the payout by the insurer is small.”
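
The co-insurance and deductible mechanics are simple enough to sketch. Below is a minimal illustration with made-up numbers, assuming one common arrangement in which the deductible is paid first and the co-insurance rate then applies to the remainder (the book doesn’t give a worked example, so this is just my own sketch):

```python
# Illustrative only: out-of-pocket payment for a given insured loss under a
# deductible-plus-co-insurance arrangement (deductible first, then a share
# of the remainder -- one common way of combining the two).

def out_of_pocket(loss, deductible, coinsurance_rate):
    """Insured pays the deductible first, then a share of the rest."""
    if loss <= deductible:
        return loss
    return deductible + coinsurance_rate * (loss - deductible)

loss = 5_000
deductible = 500
coinsurance_rate = 0.20   # insured pays 20% of the loss above the deductible

paid_by_insured = out_of_pocket(loss, deductible, coinsurance_rate)
paid_by_insurer = loss - paid_by_insured
print(paid_by_insured, paid_by_insurer)   # 1400.0 3600.0
```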

“The method of reimbursement relates to the way in which health care providers are paid for the services they provide. It is useful to distinguish between reimbursement methods, because they can affect the quantity and quality of health care. […] Retrospective reimbursement at full cost means that hospitals receive payment in full for all health care expenditures incurred in some pre-specified period of time. Reimbursement is retrospective in the sense that not only are hospitals paid after they have provided treatment, but also in that the size of the payment is determined after treatment is provided. […] Which model is used depends on whether hospitals are reimbursed for actual costs incurred, or on a fee-for-service (FFS) basis. […] Since hospital income [in these models] depends on the actual costs incurred (actual costs model) or on the volume of services provided (FFS model) there are few incentives to minimise costs. […] Prospective reimbursement implies that payments are agreed in advance and are not directly related to the actual costs incurred. […] incentives to reduce costs are greater, but payers may need to monitor the quality of care provided and access to services. If the hospital receives the same income regardless of quality, there is a financial incentive to provide low-quality care […] The problem from the point of view of the third-party payer is how best to monitor the activities of health care providers, and how to encourage them to act in a mutually beneficial way. This problem might be reduced if health care providers and third-party payers are linked in some way so that they share common goals. […] Integration between third-party payers and health care providers is a key feature of managed care.”

One of the prospective reimbursement models applied today may be of particular interest to Danes, as the DRG system is a big part of the financial model of the Danish health care system – so I’ve added a few details about this type of system below:

“An example of prospectively set costs per case is the diagnostic-related groups (DRG) pricing scheme introduced into the Medicare system in the USA in 1984, and subsequently used in a number of other countries […] Under this scheme, DRG payments are based on average costs per case in each diagnostic group derived from a sample of hospitals. […] Predicted effects of the DRG pricing scheme are cost shifting, patient shifting and DRG creep. Cost shifting and patient shifting are ways of circumventing the cost-minimising effects of DRG pricing by shifting patients or some of the services provided to patients out of the DRG pricing scheme and into other parts of the system not covered by DRG pricing. For example, instead of being provided on an inpatient basis, treatment might be provided on an outpatient basis where it is reimbursed retrospectively. DRG creep arises when hospitals classify cases into DRGs that carry a higher payment, indicating that they are more complicated than they really are. This might arise, for instance, when cases have multiple diagnoses.”
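
As a minimal sketch of the pricing logic described above – tariffs set as the average cost per case in each diagnostic group, computed from a sample of hospitals – here’s a toy calculation; the hospital labels, DRG codes, and cost figures are all invented:

```python
# Illustrative only: DRG tariffs as the average cost per case in each
# diagnostic group, computed over a (tiny, made-up) sample of hospitals.
from collections import defaultdict

# (hospital, DRG code, cost of one treated case)
cases = [
    ("A", "DRG-127", 4200), ("A", "DRG-127", 3800), ("B", "DRG-127", 4600),
    ("A", "DRG-089", 1900), ("B", "DRG-089", 2100), ("B", "DRG-089", 2000),
]

totals = defaultdict(lambda: [0.0, 0])
for _hospital, drg, cost in cases:
    totals[drg][0] += cost
    totals[drg][1] += 1

tariffs = {drg: total / n for drg, (total, n) in totals.items()}
print(tariffs)   # {'DRG-127': 4200.0, 'DRG-089': 2000.0}
# Every hospital is then paid the tariff for a case in that group regardless
# of its own actual cost -- the prospective element of the scheme.
```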

February 20, 2017 Posted by | Books, Economics, health care | Leave a comment

Rocks: A very short introduction

I liked the book. Below I have added some sample observations from the book, as well as a collection of links to various topics covered/mentioned in the book.

“To make a variety of rocks, there needs to be a variety of minerals. The Earth has shown a capacity for making an increasing variety of minerals throughout its existence. Life has helped in this [but] [e]ven a dead planet […] can evolve a fine array of minerals and rocks. This is done simply by stretching out the composition of the original homogeneous magma. […] Such stretching of composition would have happened as the magma ocean of the earliest […] Earth cooled and began to solidify at the surface, forming the first crust of this new planet — and the starting point, one might say, of our planet’s rock cycle. When magma cools sufficiently to start to solidify, the first crystals that form do not have the same composition as the overall magma. In a magma of ‘primordial Earth’ type, the first common mineral to form was probably olivine, an iron-and-magnesium-rich silicate. This is a dense mineral, and so it tends to sink. As a consequence the remaining magma becomes richer in elements such as calcium and aluminium. From this, at temperatures of around 1,000°C, the mineral plagioclase feldspar would then crystallize, in a calcium-rich variety termed anorthite. This mineral, being significantly less dense than olivine, would tend to rise to the top of the cooling magma. On the Moon, itself cooling and solidifying after its fiery birth, layers of anorthite crystals several kilometres thick built up as the rock — anorthosite — of that body’s primordial crust. This anorthosite now forms the Moon’s ancient highlands, subsequently pulverized by countless meteorite impacts. This rock type can be found on Earth, too, particularly within ancient terrains. […] Was the Earth’s first surface rock also anorthosite? Probably—but we do not know for sure, as the Earth, a thoroughly active planet throughout its existence, has consumed and obliterated nearly all of the crust that formed in the first several hundred million years of its existence, in a mysterious interval of time that we now call the Hadean Eon. […] The earliest rocks that we know of date from the succeeding Archean Eon.”

“Where plates are pulled apart, then pressure is released at depth, above the ever-opening tectonic rift, for instance beneath the mid-ocean ridge that runs down the centre of the Atlantic Ocean. The pressure release from this crustal stretching triggers decompression melting in the rocks at depth. These deep rocks — peridotite — are dense, being rich in the iron- and magnesium-bearing mineral olivine. Heated to the point at which melting just begins, so that the melt fraction makes up only a few percentage points of the total, those melt droplets are enriched in silica and aluminium relative to the original peridotite. The melt will have a composition such that, when it cools and crystallizes, it will largely be made up of crystals of plagioclase feldspar together with pyroxene. Add a little more silica and quartz begins to appear. With less silica, olivine crystallizes instead of quartz.

The resulting rock is basalt. If there was anything like a universal rock of rocky planet surfaces, it is basalt. On Earth it makes up almost all of the ocean floor bedrock — in other words, the ocean crust, that is, the surface layer, some 10 km thick. Below, there is a boundary called the Mohorovičič Discontinuity (or ‘Moho’ for short)[…]. The Moho separates the crust from the dense peridotitic mantle rock that makes up the bulk of the lithosphere. […] Basalt makes up most of the surface of Venus, Mercury, and Mars […]. On the Moon, the ‘mare’ (‘seas’) are not of water but of basalt. Basalt, or something like it, will certainly be present in large amounts on the surfaces of rocky exoplanets, once we are able to bring them into close enough focus to work out their geology. […] At any one time, ocean floor basalts are the most common rock type on our planet’s surface. But any individual piece of ocean floor is, geologically, only temporary. It is the fate of almost all ocean crust — islands, plateaux, and all — to be destroyed within ocean trenches, sliding down into the Earth along subduction zones, to be recycled within the mantle. From that destruction […] there arise the rocks that make up the most durable component of the Earth’s surface: the continents.”

“Basaltic magmas are a common starting point for many other kinds of igneous rocks, through the mechanism of fractional crystallization […]. Remove the early-formed crystals from the melt, and the remaining melt will evolve chemically, usually in the direction of increasing proportions of silica and aluminium, and decreasing amounts of iron and magnesium. These magmas will therefore produce intermediate rocks such as andesites and diorites in the finely and coarsely crystalline varieties, respectively; and then more evolved silica-rich rocks such as rhyolites (fine), microgranites (medium), and granites (coarse). […] Granites themselves can evolve a little further, especially at the late stages of crystallization of large bodies of granite magma. The final magmas are often water-rich ones that contain many of the incompatible elements (such as thorium, uranium, and lithium), so called because they are difficult to fit within the molecular frameworks of the common igneous minerals. From these final ‘sweated-out’ magmas there can crystallize a coarsely crystalline rock known as pegmatite — famous because it contains a wide variety of minerals (of the ~4,500 minerals officially recognized on Earth […] some 500 have been recognized in pegmatites).”

“The less oxygen there is [at the area of deposition], the more the organic matter is preserved into the rock record, and it is where the seawater itself, by the sea floor, has little or no oxygen that some of the great carbon stores form. As animals cannot live in these conditions, organic-rich mud can accumulate quietly and undisturbed, layer by layer, here and there entombing the skeleton of some larger planktonic organism that has fallen in from the sunlit, oxygenated waters high above. It is these kinds of sediments that […] generate[d] the oil and gas that currently power our civilization. […] If sedimentary layers have not been buried too deeply, they can remain as soft muds or loose sands for millions of years — sometimes even for hundreds of millions of years. However, most buried sedimentary layers, sooner or later, harden and turn into rock, under the combined effects of increasing heat and pressure (as they become buried ever deeper under subsequent layers of sediment) and of changes in chemical environment. […] As rocks become buried ever deeper, they become progressively changed. At some stage, they begin to change their character and depart from the condition of sedimentary strata. At this point, usually beginning several kilometres below the surface, buried igneous rocks begin to transform too. The process of metamorphism has started, and may progress until those original strata become quite unrecognizable.”

“Frozen water is a mineral, and this mineral can make up a rock, both on Earth and, very commonly, on distant planets, moons, and comets […]. On Earth today, there are large deposits of ice strata on the cold polar regions of Antarctica and Greenland, with smaller amounts in mountain glaciers […]. These ice strata, the compressed remains of annual snowfalls, have simply piled up, one above the other, over time; on Antarctica, they reach almost 5 km in thickness and at their base are about a million years old. […] The ice cannot pile up for ever, however: as the pressure builds up it begins to behave plastically and to slowly flow downslope, eventually melting or, on reaching the sea, breaking off as icebergs. As the ice mass moves, it scrapes away at the underlying rock and soil, shearing these together to form a mixed deposit of mud, sand, pebbles, and characteristic striated (ice-scratched) cobbles and boulders […] termed a glacial till. Glacial tills, if found in the ancient rock record (where, hardened, they are referred to as tillites), are a sure clue to the former presence of ice.”

“At first approximation, the mantle is made of solid rock and is not […] a seething mass of magma that the fragile crust threatens to founder into. This solidity is maintained despite temperatures that, towards the base of the mantle, are of the order of 3,000°C — temperatures that would very easily melt rock at the surface. It is the immense pressures deep in the Earth, increasing more or less in step with temperature, that keep the mantle rock in solid form. In more detail, the solid rock of the mantle may include greater or lesser (but usually lesser) amounts of melted material, which locally can gather to produce magma chambers […] Nevertheless, the mantle rock is not solid in the sense that we might imagine at the surface: it is mobile, and much of it is slowly moving plastically, taking long journeys that, over many millions of years, may encompass the entire thickness of the mantle (the kinds of speeds estimated are comparable to those at which tectonic plates move, of a few centimetres a year). These are the movements that drive plate tectonics and that, in turn, are driven by the variation in temperature (and therefore density) from the contact region with the hot core, to the cooler regions of the upper mantle.”

“The outer core will not transmit certain types of seismic waves, which indicates that it is molten. […] Even farther into the interior, at the heart of the Earth, this metal magma becomes rock once more, albeit a rock that is mostly crystalline iron and nickel. However, it was not always so. The core used to be liquid throughout and then, some time ago, it began to crystallize into iron-nickel rock. Quite when this happened has been widely debated, with estimates ranging from over three billion years ago to about half a billion years ago. The inner core has now grown to something like 2,400 km across. Even allowing for the huge spans of geological time involved, this implies estimated rates of solidification that are impressive in real time — of some thousands of tons of molten metal crystallizing into solid form per second.”
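
That last figure is easy to sanity-check with a back-of-the-envelope calculation; the inner-core density and the other inputs below are my own rough numbers rather than the book’s:

```python
# Illustrative only: a ~2,400 km diameter inner core grown over ~0.5-3 billion
# years implies crystallization rates of the order of thousands of tonnes/s.
from math import pi

radius_m = 1.2e6            # ~2,400 km across
density = 1.3e4             # kg/m^3, rough inner-core density
mass_kg = density * 4 / 3 * pi * radius_m ** 3

year_s = 3.156e7            # seconds per year
for age_Gyr in (0.5, 1.0, 3.0):   # the range of age estimates mentioned in the book
    rate_tonnes_per_s = mass_kg / (age_Gyr * 1e9 * year_s) / 1000
    print(f"{age_Gyr} Gyr old inner core -> ~{rate_tonnes_per_s:,.0f} tonnes per second")
```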

“Rocks are made out of minerals, and those minerals are not a constant of the universe. A little like biological organisms, they have evolved and diversified through time. As the minerals have evolved, so have the rocks that they make up. […] The pattern of evolution of minerals was vividly outlined by Robert Hazen and his colleagues in what is now a classic paper published in 2008. They noted that in the depths of outer space, interstellar dust, as analysed by the astronomers’ spectroscopes, seems to be built of only about a dozen minerals […] Their component elements were forged in supernova explosions, and these minerals condensed among the matter and radiation that streamed out from these stellar outbursts. […] the number of minerals on the new Earth [shortly after formation was] about 500 (while the smaller, largely dry Moon has about 350). Plate tectonics began, with its attendant processes of subduction, mountain building, and metamorphism. The number of minerals rose to about 1,500 on a planet that may still have been biologically dead. […] The origin and spread of life at first did little to increase the number of mineral species, but once oxygen-producing photosynthesis started, then there was a great leap in mineral diversity as, for each mineral, various forms of oxide and hydroxide could crystallize. After this step, about two and a half billion years ago, there were over 4,000 minerals, most of them vanishingly rare. Since then, there may have been a slight increase in their numbers, associated with such events as the appearance and radiation of metazoan animals and plants […] Humans have begun to modify the chemistry and mineralogy of the Earth’s surface, and this has included the manufacture of many new types of mineral. […] Human-made minerals are produced in laboratories and factories around the world, with many new forms appearing every year. […] Materials sciences databases now being compiled suggest that more than 50,000 solid, inorganic, crystalline species have been created in the laboratory.”

Some links of interest:

Rock. Presolar grains. Silicate minerals. Silicon–oxygen tetrahedron. Quartz. Olivine. Feldspar. Mica. Jean-Baptiste Biot. Meteoritics. Achondrite/Chondrite/Chondrule. Carbonaceous chondrite. Iron–nickel alloy. Widmanstätten pattern. Giant-impact hypothesis (in the book this is not framed as a hypothesis nor is it explicitly referred to as the GIH; it’s just taken to be the correct account of what happened back then – US). Alfred Wegener. Arthur Holmes. Plate tectonics. Lithosphere. Asthenosphere. Fractional Melting (couldn’t find a wiki link about this exact topic; the MIT link is quite technical – sorry). Hotspot (geology). Fractional crystallization. Metastability. Devitrification. Porphyry (geology). Phenocryst. Thin section. Neptunism. Pyroclastic flow. Ignimbrite. Pumice. Igneous rock. Sedimentary rock. Weathering. Slab (geology). Clay minerals. Conglomerate (geology). Breccia. Aeolian processes. Hummocky cross-stratification. Ralph Alger Bagnold. Montmorillonite. Limestone. Ooid. Carbonate platform. Turbidite. Desert varnish. Evaporite. Law of Superposition. Stratigraphy. Pressure solution. Compaction (geology). Recrystallization (geology). Cleavage (geology). Phyllite. Aluminosilicate. Gneiss. Rock cycle. Ultramafic rock. Serpentinite. Pressure-Temperature-time paths. Hornfels. Impactite. Ophiolite. Xenolith. Kimberlite. Transition zone (Earth). Mantle convection. Mantle plume. Core–mantle boundary. Post-perovskite. Earth’s inner core. Inge Lehmann. Stromatolites. Banded iron formations. Microbial mat. Quorum sensing. Cambrian explosion. Bioturbation. Biostratigraphy. Coral reef. Radiolaria. Carbonate compensation depth. Paleosol. Bone bed. Coprolite. Allan Hills 84001. Tharsis. Pedestal crater. Mineraloid. Concrete.

February 19, 2017 Posted by | Biology, Books, Geology | Leave a comment

Anesthesia

“A recent study estimated that 234 million surgical procedures requiring anaesthesia are performed worldwide annually. Anaesthesia is the largest hospital specialty in the UK, with over 12,000 practising anaesthetists […] In this book, I give a short account of the historical background of anaesthetic practice, a review of anaesthetic equipment, techniques, and medications, and a discussion of how they work. The risks and side effects of anaesthetics will be covered, and some of the subspecialties of anaesthetic practice will be explored.”

I liked the book and gave it three stars on goodreads; I was closer to four stars than two. Below I have added a few sample observations from the book, as well as what turned out to be quite a considerable number of links (more than 60, from a brief count) to topics/people/etc. discussed or mentioned in the text. I decided to spend a bit more time finding relevant links than I’ve previously done when writing link-heavy posts, so in this post I have not limited myself to wikipedia articles and I e.g. also link directly to primary literature discussed in the coverage. The links provided are, as usual, meant to indicate which kind of stuff is covered in the book, rather than to serve as an alternative to the book; some of the wikipedia articles are probably not very good (where the quality of a linked article is questionable, the main point of linking it is that I consider awareness of the existence of the concept to be of interest/importance also to people who have not read the book, even if no great resource on the topic was immediately at hand to me).

Sample observations from the book:

“[G]eneral anaesthesia is not sleep. In physiological terms, the two states are very dissimilar. The term general anaesthesia refers to the state of unconsciousness which is deliberately produced by the action of drugs on the patient. Local anaesthesia (and its related terms) refers to the numbness produced in a part of the body by deliberate interruption of nerve function; this is typically achieved without affecting consciousness. […] The purpose of inhaling ether vapour [in the past] was so that surgery would be painless, not so that unconsciousness would necessarily be produced. However, unconsciousness and immobility soon came to be considered desirable attributes […] For almost a century, lying still was the only reliable sign of adequate anaesthesia.”

“The experience of pain triggers powerful emotional consequences, including fear, anger, and anxiety. A reasonable word for the emotional response to pain is ‘suffering’. Pain also triggers the formation of memories which remind us to avoid potentially painful experiences in the future. The intensity of pain perception and suffering also depends on the mental state of the subject at the time, and the relationship between pain, memory, and emotion is subtle and complex. […] The effects of adrenaline are responsible for the appearance of someone in pain: pale, sweating, trembling, with a rapid heart rate and breathing. Additionally, a hormonal storm is activated, readying the body to respond to damage and fight infection. This is known as the stress response. […] Those responses may be abolished by an analgesic such as morphine, which will counteract all those changes. For this reason, it is routine to use analgesic drugs in addition to anaesthetic ones. […] Typical anaesthetic agents are poor at suppressing the stress response, but analgesics like morphine are very effective. […] The hormonal stress response can be shown to be harmful, especially to those who are already ill. For example, the increase in blood coagulability which evolved to reduce blood loss as a result of injury makes the patient more likely to suffer a deep venous thrombosis in the leg veins.”

“If we monitor the EEG of someone under general anaesthesia, certain identifiable changes to the signal occur. In general, the frequency spectrum of the signal slows. […] Next, the overall power of the signal diminishes. In very deep general anaesthesia, short periods of electrical silence, known as burst suppression, can be observed. Finally, the overall randomness of the signal, its entropy, decreases. In short, the EEG of someone who is anaesthetized looks completely different from someone who is awake. […] Depth of anaesthesia is no longer considered to be a linear concept […] since it is clear that anaesthesia is not a single process. It is now believed that the two most important components of anaesthesia are unconsciousness and suppression of the stress response. These can be represented on a three-dimensional diagram called a response surface. [Here’s incidentally a recent review paper on related topics, US]”
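
The entropy remark can be illustrated with a toy calculation of my own (simulated signals, not real EEG, and nothing taken from the book): the spectral entropy of a broadband ‘awake-like’ signal comes out higher than that of a slow, nearly periodic ‘anaesthetized-like’ one.

```python
# Illustrative only: spectral (Shannon) entropy of two simulated signals --
# broadband noise standing in for "awake" EEG, and a slow 2 Hz oscillation
# standing in for deep anaesthesia.
import numpy as np

def spectral_entropy(signal):
    """Shannon entropy (in bits) of the normalized power spectrum."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    p = power / power.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
fs, seconds = 256, 4
t = np.arange(fs * seconds) / fs

awake = rng.standard_normal(t.size)                                              # broadband, irregular
anaesthetized = np.sin(2 * np.pi * 2 * t) + 0.05 * rng.standard_normal(t.size)   # slow rhythm

print(spectral_entropy(awake))          # higher
print(spectral_entropy(anaesthetized))  # much lower
```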

“Before the widespread advent of anaesthesia, there were very few painkilling options available. […] Alcohol was commonly given as a means of enhancing the patient’s courage prior to surgery, but alcohol has almost no effect on pain perception. […] For many centuries, opium was the only effective pain-relieving substance known. […] For general anaesthesia to be discovered, certain prerequisites were required. On the one hand, the idea that surgery without pain was achievable had to be accepted as possible. Despite tantalizing clues from history, this idea took a long time to catch on. The few workers who pursued this idea were often openly ridiculed. On the other, an agent had to be discovered that was potent enough to render a patient suitably unconscious to tolerate surgery, but not so potent that overdose (hence accidental death) was too likely. This agent also needed to be easy to produce, tolerable for the patient, and easy enough for untrained people to administer. The herbal candidates (opium, mandrake) were too unreliable or dangerous. The next reasonable candidate, and every agent since, was provided by the proliferating science of chemistry.”

“Inducing anaesthesia by intravenous injection is substantially quicker than the inhalational method. Inhalational induction may take several minutes, while intravenous induction happens in the time it takes for the blood to travel from the needle to the brain (30 to 60 seconds). The main benefit of this is not convenience or comfort but patient safety. […] It was soon discovered that the ideal balance is to induce anaesthesia intravenously, but switch to an inhalational agent […] to keep the patient anaesthetized during the operation. The template of an intravenous induction followed by maintenance with an inhalational agent is still widely used today. […] Most of the drawbacks of volatile agents disappear when the patient is already anaesthetized [and] volatile agents have several advantages for maintenance. First, they are predictable in their effects. Second, they can be conveniently administered in known quantities. Third, the concentration delivered or exhaled by the patient can be easily and reliably measured. Finally, at steady state, the concentration of volatile agent in the patient’s expired air is a close reflection of its concentration in the patient’s brain. This gives the anaesthetist a reliable way of ensuring that enough anaesthetic is present to ensure the patient remains anaesthetized.”

“All current volatile agents are colourless liquids that evaporate into a vapour which produces general anaesthesia when inhaled. All are chemically stable, which means they are non-flammable, and not likely to break down or be metabolized to poisonous products. What distinguishes them from each other are their specific properties: potency, speed of onset, and smell. Potency of an inhalational agent is expressed as MAC, the minimum alveolar concentration required to keep 50% of adults unmoving in response to a standard surgical skin incision. MAC as a concept was introduced […] in 1963, and has proven to be a very useful way of comparing potencies of different anaesthetic agents. […] MAC correlates with observed depth of anaesthesia. It has been known for over a century that potency correlates very highly with lipid solubility; that is, the more soluble an agent is in lipid […], the more potent an anaesthetic it is. This is known as the Meyer-Overton correlation […] Speed of onset is inversely proportional to water solubility. The less soluble in water, the more rapidly an agent will take effect. […] Where immobility is produced at around 1.0 MAC, amnesia is produced at a much lower dose, typically 0.25 MAC, and unconsciousness at around 0.5 MAC. Therefore, a patient may move in response to a surgical stimulus without either being conscious of the stimulus, or remembering it afterwards.”
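
The Meyer-Overton correlation is easy to eyeball numerically. The MAC and oil:gas partition-coefficient figures below are rough textbook values quoted from memory rather than taken from this book, so treat them as illustrative only; the point is simply that MAC multiplied by lipid solubility comes out roughly constant across agents.

```python
# Illustrative only: Meyer-Overton -- the more lipid-soluble an agent, the
# lower its MAC. Values are approximate and from memory, not from the book.
agents = {
    #                  MAC (vol %), oil:gas partition coefficient
    "nitrous oxide": (104,  1.4),
    "desflurane":    (6.0,  19),
    "sevoflurane":   (2.0,  50),
    "isoflurane":    (1.15, 98),
    "halothane":     (0.75, 220),
}

for name, (mac, oil_gas) in agents.items():
    # If the correlation holds, this product is roughly the same for all agents.
    print(f"{name:15s} MAC x oil:gas = {mac * oil_gas:6.0f}")
```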

“The most useful way to estimate the body’s physiological reserve is to assess the patient’s tolerance for exercise. Exercise is a good model of the surgical stress response. The greater the patient’s tolerance for exercise, the better the perioperative outcome is likely to be […] For a smoker who is unable to quit, stopping for even a couple of days before the operation improves outcome. […] Dying ‘on the table’ during surgery is very unusual. Patients who die following surgery usually do so during convalescence, their weakened state making them susceptible to complications such as wound breakdown, chest infections, deep venous thrombosis, and pressure sores.”

“Mechanical ventilation is based on the principle of intermittent positive pressure ventilation (IPPV), gas being ‘blown’ into the patient’s lungs from the machine. […] Inflating a patient’s lungs is a delicate process. Healthy lung tissue is fragile, and can easily be damaged by overdistension (barotrauma). While healthy lung tissue is light and spongy, and easily inflated, diseased lung tissue may be heavy and waterlogged and difficult to inflate, and therefore may collapse, allowing blood to pass through it without exchanging any gases (this is known as shunt). Simply applying higher pressures may not be the answer: this may just overdistend adjacent areas of healthier lung. The ventilator must therefore provide a series of breaths whose volume and pressure are very closely controlled. Every aspect of a mechanical breath may now be adjusted by the anaesthetist: the volume, the pressure, the frequency, and the ratio of inspiratory time to expiratory time are only the basic factors.”

“All anaesthetic drugs are poisons. Remember that in achieving a state of anaesthesia you intend to poison someone, but not kill them – so give as little as possible. [Introductory quote to a chapter, from an Anaesthetics textbook – US] […] Other cells besides neurons use action potentials as the basis of cellular signalling. For example, the synchronized contraction of heart muscle is performed using action potentials, and action potentials are transmitted from nerves to skeletal muscle at the neuromuscular junction to initiate movement. Local anaesthetic drugs are therefore toxic to the heart and brain. In the heart, local anaesthetic drugs interfere with normal contraction, eventually stopping the heart. In the brain, toxicity causes seizures and coma. To avoid toxicity, the total dose is carefully limited”.

Links of interest:

Anaesthesia.
General anaesthesia.
Muscle relaxant.
Nociception.
Arthur Ernest Guedel.
Guedel’s classification.
Beta rhythm.
Frances Burney.
Laudanum.
Dwale.
Henry Hill Hickman.
Horace Wells.
William Thomas Green Morton.
Diethyl ether.
Chloroform.
James Young Simpson.
Joseph Thomas Clover.
Barbiturates.
Inhalational anaesthetic.
Antisialagogue.
Pulmonary aspiration.
Principles of Total Intravenous Anaesthesia (TIVA).
Propofol.
Patient-controlled analgesia.
Airway management.
Oropharyngeal airway.
Tracheal intubation.
Laryngoscopy.
Laryngeal mask airway.
Anaesthetic machine.
Soda lime.
Sodium thiopental.
Etomidate.
Ketamine.
Neuromuscular-blocking drug.
Neostigmine.
Sugammadex.
Gate control theory of pain.
Multimodal analgesia.
Hartmann’s solution (…what this is called seems to depend on whom you ask, but it’s called Hartmann’s solution in the book…).
Local anesthetic.
Karl Koller.
Amylocaine.
Procaine.
Lidocaine.
Regional anesthesia.
Spinal anaesthesia.
Epidural nerve block.
Intensive care medicine.
Bjørn Aage Ibsen.
Chronic pain.
Pain wind-up.
John Bonica.
Twilight sleep.
Veterinary anesthesia.
Pearse et al. (results of paper briefly discussed in the book).
Awareness under anaesthesia (skip the first page).
Pollard et al. (2007).
Postoperative nausea and vomiting.
Postoperative cognitive dysfunction.
Monk et al. (2008).
Malignant hyperthermia.
Suxamethonium apnoea.

February 13, 2017 Posted by | Books, Chemistry, Medicine, Papers, Pharmacology | Leave a comment

Particle Physics

[smbc comic, 20090213]

[smbc comic, 20090703]

(Smbc, second one here. There were a lot of relevant ones to choose from – this one also seems ‘relevant’. And this one. And this one. This one? This one? This one? Maybe this one? In the end I decided to only include the two comics displayed above, but you should be aware of the others…)

The book is a bit dated; it was published before the LHC even started operations. But it’s a decent read. I can’t say I liked it as much as the other books in the series which I recently covered, on galaxies and the laws of thermodynamics, mostly because this book is a bit more pop-science-y than those, so the level of coverage was at times a little disappointing by comparison – but that said, the book is far from terrible, I learned a lot, and I can imagine the author faced a very difficult task.

Below I have added a few observations from the book and some links to articles about some key concepts and things mentioned/covered in the book.

“[T]oday we view the collisions between high-energy particles as a means of studying the phenomena that ruled when the universe was newly born. We can study how matter was created and discover what varieties there were. From this we can construct the story of how the material universe has developed from that original hot cauldron to the cool conditions here on Earth today, where matter is made from electrons, without need for muons and taus, and where the seeds of atomic nuclei are just the up and down quarks, without need for strange or charming stuff.

In very broad terms, this is the story of what has happened. The matter that was born in the hot Big Bang consisted of quarks and particles like the electron. As concerns the quarks, the strange, charm, bottom, and top varieties are highly unstable, and died out within a fraction of a second, the weak force converting them into their more stable progeny, the up and down varieties which survive within us today. A similar story took place for the electron and its heavier versions, the muon and tau. This latter pair are also unstable and died out, courtesy of the weak force, leaving the electron as survivor. In the process of these decays, lots of neutrinos and electromagnetic radiation were also produced, which continue to swarm throughout the universe some 14 billion years later.

The up and down quarks and the electrons were the survivors while the universe was still very young and hot. As it cooled, the quarks were stuck to one another, forming protons and neutrons. The mutual gravitational attraction among these particles gathered them into large clouds that were primaeval stars. As they bumped into one another in the heart of these stars, the protons and neutrons built up the seeds of heavier elements. Some stars became unstable and exploded, ejecting these atomic nuclei into space, where they trapped electrons to form atoms of matter as we know it. […] What we can now do in experiments is in effect reverse the process and observe matter change back into its original primaeval forms.”

“A fully grown human is a bit less than two metres tall. […] to set the scale I will take humans to be about 1 metre in ‘order of magnitude’ […yet another smbc comic springs to mind here] […] Then, going to the large scales of astronomy, we have the radius of the Earth, some 10^7 m […]; that of the Sun is 10^9 m; our orbit around the Sun is 10^11 m […] note that the relative sizes of the Earth, Sun, and our orbit are factors of about 100. […] Whereas the atom is typically 10^-10 m across, its central nucleus measures only about 10^-14 to 10^-15 m. So beware the oft-quoted analogy that atoms are like miniature solar systems with the ‘planetary electrons’ encircling the ‘nuclear sun’. The real solar system has a factor 1/100 between our orbit and the size of the central Sun; the atom is far emptier, with 1/10,000 as the corresponding ratio between the extent of its central nucleus and the radius of the atom. And this emptiness continues. Individual protons and neutrons are about 10^-15 m in diameter […] the relative size of quark to proton is some 1/10,000 (at most!). The same is true for the ‘planetary’ electron relative to the proton ‘sun’: 1/10,000 rather than the ‘mere’ 1/100 of the real solar system. So the world within the atom is incredibly empty.”

“Our inability to see atoms has to do with the fact that light acts like a wave and waves do not scatter easily from small objects. To see a thing, the wavelength of the beam must be smaller than that thing is. Therefore, to see molecules or atoms needs illuminations whose wavelengths are similar to or smaller than them. Light waves, like those our eyes are sensitive to, have wavelength about 10^-7 m […]. This is still a thousand times bigger than the size of an atom. […] To have any chance of seeing molecules and atoms we need light with wavelengths much shorter than these. [And so we move into the world of X-ray crystallography and particle accelerators] […] To probe deep within atoms we need a source of very short wavelength. […] the technique is to use the basic particles […], such as electrons and protons, and speed them in electric fields. The higher their speed, the greater their energy and momentum and the shorter their associated wavelength. So beams of high-energy particles can resolve things as small as atoms.”
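
The ‘higher momentum, shorter wavelength’ relation is de Broglie’s λ = h/p. Here’s a quick back-of-the-envelope check of my own (non-relativistic, so only reasonable at modest energies):

```python
# Illustrative only: de Broglie wavelength of an electron with a given kinetic
# energy, non-relativistic approximation (fine well below ~100 keV).
from math import sqrt

h = 6.626e-34        # Planck constant, J s
m_e = 9.109e-31      # electron mass, kg
e = 1.602e-19        # elementary charge (J per eV)

def wavelength_m(kinetic_energy_eV):
    p = sqrt(2 * m_e * kinetic_energy_eV * e)   # momentum, kg m/s
    return h / p

for E in (1, 100, 10_000):   # eV
    print(f"{E:>6} eV electron: wavelength = {wavelength_m(E):.2e} m")
# ~1.2e-9 m at 1 eV, ~1.2e-10 m (about atomic size) at 100 eV, ~1.2e-11 m at 10 keV
```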

“About 400 billion neutrinos from the Sun pass through each one of us each second.”

“For a century beams of particles have been used to reveal the inner structure of atoms. These have progressed from naturally occurring alpha and beta particles, courtesy of natural radioactivity, through cosmic rays to intense beams of electrons, protons, and other particles at modern accelerators. […] Different particles probe matter in complementary ways. It has been by combining the information from [the] various approaches that our present rich picture has emerged. […] It was the desire to replicate the cosmic rays under controlled conditions that led to modern high-energy physics at accelerators. […] Electrically charged particles are accelerated by electric forces. Apply enough electric force to an electron, say, and it will go faster and faster in a straight line […] Under the influence of a magnetic field, the path of a charged particle will curve. By using electric fields to speed them, and magnetic fields to bend their trajectory, we can steer particles round circles over and over again. This is the basic idea behind huge rings, such as the 27-km-long accelerator at CERN in Geneva. […] our ability to learn about the origins and nature of matter have depended upon advances on two fronts: the construction of ever more powerful accelerators, and the development of sophisticated means of recording the collisions.”
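
The ‘magnetic fields to bend their trajectory’ part boils down to a bending radius r = p/(qB) for a charged particle moving across a field B. Here’s a rough order-of-magnitude illustration of my own, using approximate LHC-like numbers (the book predates LHC operation, so these figures are not from it):

```python
# Illustrative only: bending radius r = p / (qB) of a charged particle moving
# perpendicular to a magnetic field, with rough LHC-like numbers for scale.
q = 1.602e-19          # proton charge, C
c = 2.998e8            # speed of light, m/s

E_TeV = 7.0            # roughly the LHC design beam energy per proton
p = E_TeV * 1e12 * q / c   # ultra-relativistic, so p is approximately E/c

B = 8.3                # roughly the peak field of the LHC dipole magnets, tesla
r = p / (q * B)
print(f"bending radius ~ {r / 1000:.1f} km")   # on the order of 3 km
```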

Matter.
Particle.
Particle physics.
Strong interaction.
Weak interaction (‘good article’).
Electron (featured).
Quark (featured).
Fundamental interactions.
Electronvolt.
Electromagnetic spectrum.
Cathode ray.
Alpha particle.
Cloud chamber.
Atomic spectroscopy.
Ionization.
Resonance (particle physics).
Spin (physics).
Beta decay.
Neutrino.
Neutrino astronomy.
Antiparticle.
Baryon/meson.
Pion.
Particle accelerator/Cyclotron/Synchrotron/Linear particle accelerator.
Collider.
B-factory.
Particle detector.
Cherenkov radiation.
Sudbury Neutrino Observatory.
Quantum chromodynamics.
Color charge.
Force carrier.
W and Z bosons.
Electroweak interaction (/theory).
Exotic matter.
Strangeness.
Strange quark.
Charm (quantum number).
Antimatter.
Inverse beta decay.
Dark matter.
Standard model.
Supersymmetry.
Higgs boson.
Quark–gluon plasma.
CP violation.

February 9, 2017 Posted by | Books, Physics | Leave a comment