Econstudentlog

James Simons interview


James Simons. Differential geometry. Minimal varieties in riemannian manifolds. Shiing-Shen Chern. Characteristic Forms and Geometric Invariants. Renaissance Technologies.

“That’s really what’s great about basic science and in this case mathematics, I mean, I didn’t know any physics. It didn’t occur to me that this material, that Chern and I had developed would find use somewhere else altogether. This happens in basic science all the time that one guy’s discovery leads to someone else’s invention and leads to another guy’s machine or whatever it is. Basic science is the seed corn of our knowledge of the world. …I loved the subject, but I liked it for itself, I wasn’t thinking of applications. […] the government’s not doing such a good job at supporting basic science and so there’s a role for philanthropy, an increasingly important role for philanthropy.”

“My algorithm has always been: You put smart people together, you give them a lot of freedom, create an atmosphere where everyone talks to everyone else. They’re not hiding in the corner with their own little thing. They talk to everybody else. And you provide the best infrastructure. The best computers and so on that people can work with and make everyone partners.”

“We don’t have enough teachers of mathematics who know it, who know the subject … and that’s for a simple reason: 30-40 years ago, if you knew some mathematics, enough to teach in let’s say high school, there weren’t a million other things you could do with that knowledge. Oh yeah, maybe you could become a professor, but let’s suppose you’re not quite at that level but you’re good at math and so on.. Being a math teacher was a nice job. But today if you know that much mathematics, you can get a job at Google, you can get a job at IBM, you can get a job in Goldman Sachs, I mean there’s plenty of opportunities that are going to pay more than being a high school teacher. There weren’t so many when I was going to high school … so the quality of high school teachers in math has declined, simply because if you know enough to teach in high school you know enough to work for Google…”

January 12, 2021 Posted by | Mathematics, Papers, Science

Artificial intelligence (I?)

This book was okay, but nothing all that special. In my opinion there’s too much philosophy and similar stuff in there (‘what does intelligence really mean anyway?’), and the coverage isn’t nearly as focused on technological aspects as e.g. Winfield’s (…in my opinion better…) book from the same series on robotics (which I covered here) was; I am certain I’d have liked this book better if it had provided a similar type of coverage to Winfield’s, but it didn’t. However, it’s far from terrible, and I liked the author’s skeptical approach to e.g. singularitarianism. Below I have added some quotes and links, as usual.

“Artificial intelligence (AI) seeks to make computers do the sorts of things that minds can do. Some of these (e.g. reasoning) are normally described as ‘intelligent’. Others (e.g. vision) aren’t. But all involve psychological skills — such as perception, association, prediction, planning, motor control — that enable humans and animals to attain their goals. Intelligence isn’t a single dimension, but a richly structured space of diverse information-processing capacities. Accordingly, AI uses many different techniques, addressing many different tasks. […] although AI needs physical machines (i.e. computers), it’s best thought of as using what computer scientists call virtual machines. A virtual machine isn’t a machine depicted in virtual reality, nor something like a simulated car engine used to train mechanics. Rather, it’s the information-processing system that the programmer has in mind when writing a program, and that people have in mind when using it. […] Virtual machines in general are comprised of patterns of activity (information processing) that exist at various levels. […] the human mind can be understood as the virtual machine – or rather, the set of mutually interacting virtual machines, running in parallel […] – that is implemented in the brain. Progress in AI requires progress in defining interesting/useful virtual machines. […] How the information is processed depends on the virtual machine involved. [There are many different approaches.] […] In brief, all the main types of AI were being thought about, and even implemented, by the late 1960s – and in some cases, much earlier than that. […] Neural networks are helpful for modelling aspects of the brain, and for doing pattern recognition and learning. Classical AI (especially when combined with statistics) can model learning too, and also planning and reasoning. Evolutionary programming throws light on biological evolution and brain development. Cellular automata and dynamical systems can be used to model development in living organisms. Some methodologies are closer to biology than to psychology, and some are closer to non-reflective behaviour than to deliberative thought. To understand the full range of mentality, all of them will be needed […]. Many AI researchers [however] don’t care about how minds work: they seek technological efficiency, not scientific understanding. […] In the 21st century, […] it has become clear that different questions require different types of answers”.

“State-of-the-art AI is a many-splendoured thing. It offers a profusion of virtual machines, doing many different kinds of information processing. There’s no key secret here, no core technique unifying the field: AI practitioners work in highly diverse areas, sharing little in terms of goals and methods. […] A host of AI applications exist, designed for countless specific tasks and used in almost every area of life, by laymen and professionals alike. Many outperform even the most expert humans. In that sense, progress has been spectacular. But the AI pioneers weren’t aiming only for specialist systems. They were also hoping for systems with general intelligence. Each human-like capacity they modelled — vision, reasoning, language, learning, and so on — would cover its entire range of challenges. Moreover, these capacities would be integrated when appropriate. Judged by those criteria, progress has been far less impressive. […] General intelligence is still a major challenge, still highly elusive. […] problems can’t always be solved merely by increasing computer power. New problem-solving methods are often needed. Moreover, even if a particular method must succeed in principle, it may need too much time and/or memory to succeed in practice. […] Efficiency is important, too: the fewer the number of computations, the better. In short, problems must be made tractable. There are several basic strategies for doing that. All were pioneered by classical symbolic AI, or GOFAI, and all are still essential today. One is to direct attention to only a part of the search space (the computer’s representation of the problem, within which the solution is assumed to be located). Another is to construct a smaller search space by making simplifying assumptions. A third is to order the search efficiently. Yet another is to construct a different search space, by representing the problem in a new way. These approaches involve heuristics, planning, mathematical simplification, and knowledge representation, respectively. […] Often, the hardest part of AI problem solving is presenting the problem to the system in the first place. […] the information (‘knowledge’) concerned must be presented to the system in a fashion that the machine can understand – in other words, that it can deal with. […] AI’s way of doing this are highly diverse.”

“The rule-based form of knowledge representation enables programs to be built gradually, as the programmer – or perhaps an AGI system itself – learns more about the domain. A new rule can be added at any time. There’s no need to rewrite the program from scratch. However, there’s a catch. If the new rule isn’t logically consistent with the existing ones, the system won’t always do what it’s supposed to do. It may not even approximate what it’s supposed to do. When dealing with a small set of rules, such logical conflicts are easily avoided, but larger systems are less transparent. […] An alternative form of knowledge representation for concepts is semantic networks […] A semantic network links concepts by semantic relations […] semantic networks aren’t the same thing as neural networks. […] distributed neural networks represent knowledge in a very different way. There, individual concepts are represented not by a single node in a carefully defined associative net, but by the changing patterns of activity across an entire network. Such systems can tolerate conflicting evidence, so aren’t bedevilled by the problems of maintaining logical consistency […] Even a single mind involves distributed cognition, for it integrates many cognitive, motivational, and emotional subsystems […] Clearly, human-level AGI would involve distributed cognition.”
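
A minimal sketch of the incremental, rule-by-rule style of knowledge representation described in the quote above, in Python; the rules and facts are invented for illustration and are not taken from the book. The point is simply that rules can be appended one at a time, and that nothing in such an engine guards against the logical conflicts the passage warns about.

```python
# A minimal forward-chaining rule engine (toy sketch, not from the book).
# Each rule is a (premises, conclusion) pair; inference repeatedly fires any
# rule whose premises are all currently known facts, until nothing changes.

rules = [
    ({"has_fever", "has_rash"}, "suspect_measles"),
    ({"suspect_measles"}, "recommend_isolation"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)  # fire the rule
                changed = True
    return facts

print(forward_chain({"has_fever", "has_rash"}, rules))

# A new rule can simply be appended without rewriting anything else --
# but note that nothing checks whether it is consistent with the old rules.
rules.append(({"has_fever", "recently_vaccinated"}, "measles_unlikely"))
print(forward_chain({"has_fever", "has_rash", "recently_vaccinated"}, rules))
```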

“In short, most human visual achievements surpass today’s AI. Often, AI researchers aren’t clear about what questions to ask. For instance, think about folding a slippery satin dress neatly. No robot can do this (although some can be instructed, step by step, how to fold an oblong terry towel). Or consider putting on a T-shirt: the head must go in first, and not via a sleeve — but why? Such topological problems hardly feature in AI. None of this implies that human-level computer vision is impossible. But achieving it is much more difficult than most people believe. So this is a special case of the fact noted in Chapter 1: that AI has taught us that human minds are hugely richer, and more subtle, than psychologists previously imagined. Indeed, that is the main lesson to be learned from AI. […] Difficult though it is to build a high-performing AI specialist, building an AI generalist is orders of magnitude harder. (Deep learning isn’t the answer: its aficionados admit that ‘new paradigms are needed’ to combine it with complex reasoning — scholarly code for ‘we haven’t got a clue’.) That’s why most AI researchers abandoned that early hope, turning instead to multifarious narrowly defined tasks—often with spectacular success.”

“Some machine learning uses neural networks. But much relies on symbolic AI, supplemented by powerful statistical algorithms. In fact, the statistics really do the work, the GOFAI merely guiding the worker to the workplace. Accordingly, some professionals regard machine learning as computer science and/or statistics —not AI. However, there’s no clear boundary here. Machine learning has three broad types: supervised, unsupervised, and reinforcement learning. […] In supervised learning, the programmer ‘trains’ the system by defining a set of desired outcomes for a range of inputs […], and providing continual feedback about whether it has achieved them. The learning system generates hypotheses about the relevant features. Whenever it classifies incorrectly, it amends its hypothesis accordingly. […] In unsupervised learning, the user provides no desired outcomes or error messages. Learning is driven by the principle that co-occurring features engender expectations that they will co-occur in future. Unsupervised learning can be used to discover knowledge. The programmers needn’t know what patterns/clusters exist in the data: the system finds them for itself […but even though Boden does not mention this fact, caution is most definitely warranted when applying such systems/methods to data (..it remains true that “Truth and true models are not statistically identifiable from data” – as usual, the go-to reference here is Burnham & Anderson)]. Finally, reinforcement learning is driven by analogues of reward and punishment: feedback messages telling the system that what it just did was good or bad. Often, reinforcement isn’t simply binary […] Given various theories of probability, there are many different algorithms suitable for distinct types of learning and different data sets.”
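
To make the supervised/unsupervised distinction in the quote above concrete, here is a small scikit-learn sketch of my own (the data are synthetic; nothing here comes from the book): the same two-cluster data set is learned once with the desired outputs provided (supervised) and once without any labels (unsupervised).

```python
# Toy illustration of supervised vs. unsupervised learning (synthetic data, not from the book).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two noisy clusters of 2-D points; y records which cluster generated each point.
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

# Supervised: the desired outputs (y) are provided during training,
# and feedback comes from comparing predictions to those labels.
clf = LogisticRegression().fit(X, y)
print("supervised training accuracy:", clf.score(X, y))

# Unsupervised: no labels are given; the algorithm looks for co-occurring
# structure (here, two clusters) on its own.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("cluster sizes found:", np.bincount(km.labels_))
```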

“Countless AI applications use natural language processing (NLP). Most focus on the computer’s ‘understanding’ of language that is presented to it, not on its own linguistic production. That’s because NLP generation is even more difficult than NLP acceptance [I had a suspicion this might be the case before reading the book, but I didn’t know – US]. […] It’s now clear that handling fancy syntax isn’t necessary for summarizing, questioning, or translating a natural-language text. Today’s NLP relies more on brawn (computational power) than on brain (grammatical analysis). Mathematics — specifically, statistics — has overtaken logic, and machine learning (including, but not restricted to, deep learning) has displaced syntactic analysis. […] In modern-day NLP, powerful computers do statistical searches of huge collections (‘corpora’) of texts […] to find word patterns both commonplace and unexpected. […] In general […], the focus is on words and phrases, not syntax. […] Machine-matching of languages from different language groups is usually difficult. […] Human judgements of relevance are often […] much too subtle for today’s NLP. Indeed, relevance is a linguistic/conceptual version of the unforgiving ‘frame problem‘ in robotics […]. Many people would argue that it will never be wholly mastered by a non-human system.”
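
The ‘brawn over brain’ point in the quote above (word-pattern statistics over corpora rather than grammatical analysis) can be illustrated in a few lines of Python; the tiny ‘corpus’ below is invented, and real systems of course work on billions of words.

```python
# Counting word bigrams in a tiny, invented 'corpus': no grammar, just statistics.
from collections import Counter

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
]

bigrams = Counter()
for sentence in corpus:
    words = sentence.split()
    bigrams.update(zip(words, words[1:]))

# The most frequent word pairs stand in for 'knowledge' about the language.
print(bigrams.most_common(3))
```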

“[M]any AI research groups are now addressing emotion. Most (not quite all) of this research is theoretically shallow. And most is potentially lucrative, being aimed at developing ‘computer companions’. These are AI systems — some screen-based, some ambulatory robots — designed to interact with people in ways that (besides being practically helpful) are affectively comfortable, even satisfying, for the user. Most are aimed at the elderly and/or disabled, including people with incipient dementia. Some are targeted on babies or infants. Others are interactive ‘adult toys’. […] AI systems can already recognize human emotions in various ways. Some are physiological: monitoring the person’s breathing rate and galvanic skin response. Some are verbal: noting the speaker’s speed and intonation, as well as their vocabulary. And some are visual: analysing their facial expressions. At present, all these methods are relatively crude. The user’s emotions are both easily missed and easily misinterpreted. […] [A] point here [in the development and evaluation of AI] is that emotions aren’t merely feelings. They involve functional, as well as phenomenal, consciousness […]. Specifically, they are computational mechanisms that enable us to schedule competing motives – and without which we couldn’t function. […] If we are ever to achieve AGI, emotions such as anxiety will have to be included – and used.”

[The point made in the book is better made in Aureli et al.’s book, especially the last chapters to which the coverage in the linked post refers. The point is that emotions enable us to make better decisions, or perhaps even to make a decision in the first place; the emotions we feel in specific contexts will tend not to be even remotely random; rather, they will tend to a significant extent to be Nature’s (…and Mr. Darwin’s) attempt to tell us how to handle a specific conflict of interest in the ‘best’ manner. You don’t need to do the math; your forebears did it for you, which is why you’re now …angry, worried, anxious, etc. If you had to do the math every time before you made a decision, you’d be in trouble, and emotions provide a great shortcut in many contexts. The potential for such shortcuts seems really important if you want an agent to act intelligently, regardless of whether said agent is ‘artificial’ or not. The book very briefly mentions a few of Minsky’s thoughts on these topics, and people who are curious could probably do worse than read some of his stuff. This book seems like a place to start.]

Links:

GOFAI (“Good Old-Fashioned Artificial Intelligence”).
Ada Lovelace. Charles Babbage. Alan Turing. Turing machine. Turing test. Norbert Wiener. John von Neumann. W. Ross Ashby. William Grey Walter. Oliver Selfridge. Kenneth Craik. Gregory Bateson. Frank Rosenblatt. Marvin Minsky. Seymour Papert.
A logical calculus of the ideas immanent in nervous activity (McCulloch & Pitts, 1943).
Propositional logic. Logic gate.
Arthur Samuel’s checkers player. Logic Theorist. General Problem Solver. The Homeostat. Pandemonium architecture. Perceptron. Cyc.
Fault-tolerant computer system.
Cybernetics.
Programmed Data Processor (PDP).
Artificial life.
Forward chaining. Backward chaining.
Rule-based programming. MYCIN. Dendral.
Semantic network.
Non-monotonic logic. Fuzzy logic.
Facial recognition system. Computer vision.
Bayesian statistics.
Helmholtz machine.
DQN algorithm.
AlphaGo. AlphaZero.
Human Problem Solving (Newell & Simon, 1972).
ACT-R.
NELL (Never-Ending Language Learning).
SHRDLU.
ALPAC.
Google translate.
Data mining. Sentiment analysis. Siri. Watson (computer).
Paro (robot).
Uncanny valley.
CogAff architecture.
Connectionism.
Constraint satisfaction.
Content-addressable memory.
Graceful degradation.
Physical symbol system hypothesis.

January 10, 2019 Posted by | Biology, Books, Computer science, Engineering, Language, Mathematics, Papers, Psychology, Statistics

Oceans (I)

I read this book quite some time ago, but back when I did I never blogged it; instead I just added a brief review on goodreads. I remember that the main reason why I decided against blogging it shortly after I’d read it was that the coverage overlapped a great deal with Mladenov’s marine biology text, which I had at that time just read and actually did blog in some detail. I figured that if I wanted to blog this book as well I would be well-advised to wait a while, so that I’d at least have forgotten some of the stuff first – that way blogging the book might end up serving as a review of stuff I’d forgotten, rather than as a review of stuff that would still be fresh in my memory and so wouldn’t really be worth reviewing anyway. So now here we are a few months later, and I have come to think it might be a good idea to blog the book.

Below I have added some quotes from the first half of the book and some links to topics/people/etc. covered.

“Several methods now exist for calculating the rate of plate motion. Most reliable for present-day plate movement are direct observations made using satellites and laser technology. These show that the Atlantic Ocean is growing wider at a rate of between 2 and 4 centimetres per year (about the rate at which fingernails grow), the Indian Ocean is attempting to grow at a similar rate but is being severely hampered by surrounding plate collisions, while the fastest spreading centre is the East Pacific Rise along which ocean crust is being created at rates of around 17 centimetres per year (the rate at which hair grows). […] The Nazca plate has been plunging beneath South America for at least 200 million years – the imposing Andes, the longest mountain chain on Earth, is the result. […] By around 120 million years ago, South America and Africa began to drift apart and the South Atlantic was born. […] sea levels rose higher than at any time during the past billion years, perhaps as much as 350 metres higher than today. Only 18 per cent of the globe was dry land — 82 per cent was under water. These excessively high sea levels were the result of increased spreading activity — new oceans, new ridges, and faster spreading rates all meant that the mid-ocean ridge systems collectively displaced a greater volume of water than ever before. Global warming was far more extreme than today. Temperatures in the ocean rose to around 30°C at the equator and as much as 14°C at the poles. Ocean circulation was very sluggish.”

“The land–ocean boundary is known as the shoreline. Seaward of this, all continents are surrounded by a broad, flat continental shelf, typically 10–100 kilometres wide, which slopes very gently (less than one-tenth of a degree) to the shelf edge at a water depth of around 100 metres. Beyond this the continental slope plunges to the deep-ocean floor. The slope is from tens to a few hundred kilometres wide and with a mostly gentle gradient of 3–8 degrees, but locally steeper where it is affected by faulting. The base of slope abuts the abyssal plain — flat, almost featureless expanses between 4 and 6 kilometres deep. The oceans are compartmentalized into abyssal basins separated by submarine mountain ranges and plateaus, which are the result of submarine volcanic outpourings. Those parts of the Earth that are formed of ocean crust are relatively lower, because they are made up of denser rocks — basalts. Those formed of less dense rocks (granites) of the continental crust are relatively higher. Seawater fills in the deeper parts, the ocean basins, to an average depth of around 4 kilometres. In fact, some parts are shallower because the ocean crust is new and still warm — these are the mid-ocean ridges at around 2.5 kilometres — whereas older, cooler crust drags the seafloor down to a depth of over 6 kilometres. […] The seafloor is almost entirely covered with sediment. In places, such as on the flanks of mid-ocean ridges, it is no more than a thin veneer. Elsewhere, along stable continental margins or beneath major deltas where deposition has persisted for millions of years, the accumulated thickness can exceed 15 kilometres. These areas are known as sedimentary basins“.

“The super-efficiency of water as a solvent is due to an asymmetrical bonding between hydrogen and oxygen atoms. The resultant water molecule has an angular or kinked shape with weakly charged positive and negative ends, rather like magnetic poles. This polar structure is especially significant when water comes into contact with substances whose elements are held together by the attraction of opposite electrical charges. Such ionic bonding is typical of many salts, such as sodium chloride (common salt) in which a positive sodium ion is attracted to a negative chloride ion. Water molecules infiltrate the solid compound, the positive hydrogen end being attracted to the chloride and the negative oxygen end to the sodium, surrounding and then isolating the individual ions, thereby disaggregating the solid [I should mention that if you’re interested in knowing (much) more this topic, and closely related topics, this book covers these things in great detail – US]. An apparently simple process, but extremely effective. […] Water is a super-solvent, absorbing gases from the atmosphere and extracting salts from the land. About 3 billion tonnes of dissolved chemicals are delivered by rivers to the oceans each year, yet their concentration in seawater has remained much the same for at least several hundreds of millions of years. Some elements remain in seawater for 100 million years, others for only a few hundred, but all are eventually cycled through the rocks. The oceans act as a chemical filter and buffer for planet Earth, control the distribution of temperature, and moderate climate. Inestimable numbers of calories of heat energy are transferred every second from the equator to the poles in ocean currents. But, the ocean configuration also insulates Antarctica and allows the build-up of over 4000 metres of ice and snow above the South Pole. […] Over many aeons, the oceans slowly accumulated dissolved chemical ions (and complex ions) of almost every element present in the crust and atmosphere. Outgassing from the mantle from volcanoes and vents along the mid-ocean ridges contributed a variety of other elements […] The composition of the first seas was mostly one of freshwater together with some dissolved gases. Today, however, the world ocean contains over 5 trillion tonnes of dissolved salts, and nearly 100 different chemical elements […] If the oceans’ water evaporated completely, the dried residue of salts would be equivalent to a 45-metre-thick layer over the entire planet.”

“The average time a single molecule of water remains in any one reservoir varies enormously. It may survive only one night as dew, up to a week in the atmosphere or as part of an organism, two weeks in rivers, and up to a year or more in soils and wetlands. Residence times in the oceans are generally over 4000 years, and water may remain in ice caps for tens of thousands of years. Although the ocean appears to be in a steady state, in which both the relative proportion and amounts of dissolved elements per unit volume are nearly constant, this is achieved by a process of chemical cycles and sinks. The input of elements from mantle outgassing and continental runoff must be exactly balanced by their removal from the oceans into temporary or permanent sinks. The principal sink is the sediment and the principal agent removing ions from solution is biological. […] The residence times of different elements vary enormously from tens of millions of years for chloride and sodium, to a few hundred years only for manganese, aluminium, and iron. […] individual water molecules have cycled through the atmosphere (or mantle) and returned to the seas more than a million times since the world ocean formed.”

“Because of its polar structure and hydrogen bonding between individual molecules, water has both a high capacity for storing large amounts of heat and one of the highest specific heat values of all known substances. This means that water can absorb (or release) large amounts of heat energy while changing relatively little in temperature. Beach sand, by contrast, has a specific heat five times lower than water, which explains why, on sunny days, beaches soon become too hot to stand on with bare feet while the sea remains pleasantly cool. Solar radiation is the dominant source of heat energy for the ocean and for the Earth as a whole. The differential in solar input with latitude is the main driver for atmospheric winds and ocean currents. Both winds and especially currents are the prime means of mitigating the polar–tropical heat imbalance, so that the polar oceans do not freeze solid, nor the equatorial oceans gently simmer. For example, the Gulf Stream transports some 550 trillion calories from the Caribbean Sea across the North Atlantic each second, and so moderates the climate of north-western Europe.”

“[W]hy is [the sea] mostly blue? The sunlight incident on the sea has a full spectrum of wavelengths, including the rainbow of colours that make up the visible spectrum […] The longer wavelengths (red) and very short (ultraviolet) are preferentially absorbed by water, rapidly leaving near-monochromatic blue light to penetrate furthest before it too is absorbed. The dominant hue that is backscattered, therefore, is blue. In coastal waters, suspended sediment and dissolved organic debris absorb additional short wavelengths (blue) resulting in a greener hue. […] The speed of sound in seawater is about 1500 metres per second, almost five times that in air. It is even faster where the water is denser, warmer, or more salty and shows a slow but steady increase with depth (related to increasing water pressure).”

“From top to bottom, the ocean is organized into layers, in which the physical and chemical properties of the ocean – salinity, temperature, density, and light penetration – show strong vertical segregation. […] Almost all properties of the ocean vary in some way with depth. Light penetration is attenuated by absorption and scattering, giving an upper photic and lower aphotic zone, with a more or less well-defined twilight region in between. Absorption of incoming solar energy also preferentially heats the surface waters, although with marked variations between latitudes and seasons. This results in a warm surface layer, a transition layer (the thermocline) through which the temperature decreases rapidly with depth, and a cold deep homogeneous zone reaching to the ocean floor. Exactly the same broad three-fold layering is true for salinity, except that salinity increases with depth — through the halocline. The density of seawater is controlled by its temperature, salinity, and pressure, such that colder, saltier, and deeper waters are all more dense. A rapid density change, known as the pycnocline, is therefore found at approximately the same depth as the thermocline and halocline. This varies from about 10 to 500 metres, and is often completely absent at the highest latitudes. Winds and waves thoroughly stir and mix the upper layers of the ocean, even destroying the layered structure during major storms, but barely touch the more stable, deep waters.”

Links:

Arvid Pardo. Law of the Sea Convention.
Polynesians.
Ocean exploration timeline (a different timeline is presented in the book, but there’s some overlap). Age of Discovery. Vasco da Gama. Christopher Columbus. John Cabot. Amerigo Vespucci. Ferdinand Magellan. Luigi Marsigli. James Cook.
HMS Beagle. HMS Challenger. Challenger expedition.
Deep Sea Drilling Project. Integrated Ocean Drilling Program. JOIDES Resolution.
World Ocean.
Geological history of Earth (this article of course covers much more than is covered in the book, but the book does cover some highlights). Plate tectonics. Lithosphere. Asthenosphere. Convection. Global mid-ocean ridge system.
Pillow lava. Hydrothermal vent. Hot spring.
Ophiolite.
Mohorovičić discontinuity.
Mid-Atlantic Ridge. Subduction zone. Ring of Fire.
Pluton. Nappe. Mélange. Transform fault. Strike-slip fault. San Andreas fault.
Paleoceanography. Tethys Ocean. Laurasia. Gondwana.
Oceanic anoxic event. Black shale.
Seabed.
Bengal Fan.
Fracture zone.
Seamount.
Terrigenous sediment. Biogenic and chemogenic sediment. Halite. Gypsum.
Carbonate compensation depth.
Laurentian fan.
Deep-water sediment waves. Submarine landslide. Turbidity current.
Water cycle.
Ocean acidification.
Timing and Climatic Consequences of the Opening of Drake Passage. The Opening of the Tasmanian Gateway Drove Global Cenozoic Paleoclimatic and Paleoceanographic Changes (report). Antarctic Circumpolar Current.
SOFAR channel.
Bathymetry.

June 18, 2018 Posted by | Books, Chemistry, Geology, Papers, Physics

Mathematics in Cryptography II

Some links to stuff covered in the lecture:

Public-key cryptography.
New Directions in Cryptography (Diffie & Hellman, 1976).
The history of Non-Secret Encryption (James Ellis).
Note on “Non-Secret Encryption” – Cliff Cocks (1973).
RSA (cryptosystem).
Discrete Logarithm Problem.
Diffie–Hellman key exchange.
AES (Advanced Encryption Standard).
Triple DES.
Trusted third party (TTP).
Key management.
Man-in-the-middle attack.
Digital signature.
Public key certificate.
Secret sharing.
Hash function. Cryptographic hash function.
Secure Hash Algorithm 2 (SHA-2).
Non-repudiation (digital security).
L-notation. L (complexity).
ElGamal signature scheme.
Digital Signature Algorithm (DSA).
Schnorr signature.
Identity-based cryptography.
Identity-Based Cryptosystems and Signature Schemes (Adi Shamir, 1984).
Algorithms for Quantum Computation: Discrete Logarithms and Factoring (Peter Shor, 1994).
Quantum resistant cryptography.
Elliptic curve. Elliptic-curve cryptography.
Projective space.
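
For readers who want something concrete to attach to the Diffie–Hellman key exchange link above, here is a toy Python sketch. The prime and generator are deliberately tiny, illustrative choices of mine; real deployments use large, standardized groups (or elliptic curves, as discussed in the lecture) and vetted libraries rather than hand-rolled code.

```python
# Toy Diffie-Hellman key exchange; parameters are far too small for real use.
import secrets

p = 4294967291          # a small prime (2**32 - 5), for illustration only
g = 5                   # illustrative generator choice

a = secrets.randbelow(p - 2) + 1   # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1   # Bob's secret exponent

A = pow(g, a, p)        # Alice sends A = g^a mod p over the open channel
B = pow(g, b, p)        # Bob sends B = g^b mod p

shared_alice = pow(B, a, p)        # (g^b)^a mod p
shared_bob = pow(A, b, p)          # (g^a)^b mod p
assert shared_alice == shared_bob  # both sides now hold the same secret
print(hex(shared_alice))
```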

I have included very few links relating to the topics covered in the last part of the lecture. This was deliberate, and not just a result of the type of coverage included in that part of the lecture. In my opinion non-mathematicians should probably skip the last 25 minutes or so, as they’re not really worth the effort; this is not only due to technical issues (the lecturer is writing on the blackboard and for several minutes you’re unable to see what she’s writing, which is …unfortunate), but those certainly were not helping. The first hour of the lecture is great; the last 25 minutes are, well, less great, in my opinion. You should however not miss the first part of the coverage of the ECC-related stuff (in particular the coverage ~55-58 minutes in) if you’re interested in making sense of how ECC works; I certainly found that part of the coverage very helpful.

June 2, 2018 Posted by | Computer science, Cryptography, Lectures, Mathematics, Papers

On the cryptographic hardness of finding a Nash equilibrium

I found it annoying that you generally can’t really hear the questions posed by the audience (which includes people like Avi Wigderson), especially considering that there are quite a few of them, particularly in the middle section of the lecture. There are intermittent issues with the camera’s focus throughout the talk, but those are all transitory problems that should not keep you from watching the lecture. The sound issue at the beginning of the talk is resolved after 40 seconds.

One important take-away from this talk, if you choose not to watch it: “to date, there is no known efficient algorithm to find Nash equilibrium in games”. In general this paper – coauthored by the lecturer – seems from a brief skim to cover many of the topics also included in the lecture. I have added some other links to articles and topics covered/mentioned in the lecture below.

Nash’s Existence Theorem.
Reducibility Among Equilibrium Problems (Goldberg & Papadimitriou).
Three-Player Games Are Hard (Daskalakis & Papadimitriou).
3-Nash is PPAD-Complete (Chen & Deng).
PPAD (complexity).
NP-hardness.
On the (Im)possibility of Obfuscating Programs (Barak et al.).
On the Impossibility of Obfuscation with Auxiliary Input (Goldwasser & Kalai).
On Best-Possible Obfuscation (Goldwasser & Rothblum).
Functional Encryption without Obfuscation (Garg et al.).
On the Complexity of the Parity Argument and Other Inefficient Proofs of Existence (Papadimitriou).
Pseudorandom function family.
Revisiting the Cryptographic Hardness of Finding a Nash Equilibrium (Garg, Pandey & Srinivasan).
Constrained Pseudorandom Functions and Their Applications (Boneh & Waters).
Delegatable Pseudorandom Functions and Applications (Kiayias et al.).
Functional Signatures and Pseudorandom Functions (Boyle, Goldwasser & Ivan).
Universal Constructions and Robust Combiners for Indistinguishability Obfuscation and Witness Encryption (Ananth et al.).

April 18, 2018 Posted by | Computer science, Cryptography, Game theory, Lectures, Mathematics, Papers

Beyond Significance Testing (IV)

Below I have added some quotes from chapters 5, 6, and 7 of the book.

“There are two broad classes of standardized effect sizes for analysis at the group or variable level, the d family, also known as group difference indexes, and the r family, or relationship indexes […] Both families are metric- (unit-) free effect sizes that can compare results across studies or variables measured in different original metrics. Effect sizes in the d family are standardized mean differences that describe mean contrasts in standard deviation units, which can exceed 1.0 in absolute value. Standardized mean differences are signed effect sizes, where the sign of the statistic indicates the direction of the corresponding contrast. Effect sizes in the r family are scaled in correlation units that generally range from −1.0 to +1.0, where the sign indicates the direction of the relation […] Measures of association are unsigned effect sizes and thus do not indicate directionality.”

“The correlation rpb is for designs with two unrelated samples. […] rpb […] is affected by base rate, or the proportion of cases in one group versus the other, p and q. It tends to be highest in balanced designs. As the design becomes more unbalanced holding all else constant, rpb approaches zero. […] rpb is not directly comparable across studies with dissimilar relative group sizes […]. The correlation rpb is also affected by the total variability (i.e., ST). If this variation is not constant over samples, values of rpb may not be directly comparable.”
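
As a concrete illustration of the two effect size families, and of the base-rate sensitivity of rpb described above, here is a small Python sketch of my own (synthetic data, not from the book): the underlying mean difference is the same in both comparisons, but rpb is attenuated when the groups are of very unequal size.

```python
# Toy illustration of the d family vs. the r family (synthetic data).
import numpy as np

def cohen_d(x1, x2):
    """Standardized mean difference using the pooled standard deviation."""
    n1, n2 = len(x1), len(x2)
    pooled_var = ((n1 - 1) * np.var(x1, ddof=1) +
                  (n2 - 1) * np.var(x2, ddof=1)) / (n1 + n2 - 2)
    return (np.mean(x1) - np.mean(x2)) / np.sqrt(pooled_var)

def point_biserial(x1, x2):
    """Correlation between group membership (1/0) and the pooled outcome scores."""
    scores = np.concatenate([x1, x2])
    groups = np.concatenate([np.ones(len(x1)), np.zeros(len(x2))])
    return np.corrcoef(groups, scores)[0, 1]

rng = np.random.default_rng(1)
treat = rng.normal(0.5, 1, 200)          # treatment group, true shift of 0.5 SD
ctrl_balanced = rng.normal(0, 1, 200)    # balanced design
ctrl_unbalanced = rng.normal(0, 1, 20)   # same effect, very unequal group sizes

# d is comparable in both designs, but rpb is attenuated in the unbalanced one.
print("balanced:   d = %.2f, rpb = %.2f" %
      (cohen_d(treat, ctrl_balanced), point_biserial(treat, ctrl_balanced)))
print("unbalanced: d = %.2f, rpb = %.2f" %
      (cohen_d(treat, ctrl_unbalanced), point_biserial(treat, ctrl_unbalanced)))
```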

“Too many researchers neglect to report reliability coefficients for scores analyzed. This is regrettable because effect sizes cannot be properly interpreted without knowing whether the scores are precise. The general effect of measurement error in comparative studies is to attenuate absolute standardized effect sizes and reduce the power of statistical tests. Measurement error also contributes to variation in observed results over studies. Of special concern is when both score reliabilities and sample sizes vary from study to study. If so, effects of sampling error are confounded with those due to measurement error. […] There are ways to correct some effect sizes for measurement error (e.g., Baguley, 2009), but corrected effect sizes are rarely reported. It is more surprising that measurement error is ignored in most meta-analyses, too. F. L. Schmidt (2010) found that corrected effect sizes were analyzed in only about 10% of the 199 meta-analytic articles published in Psychological Bulletin from 1978 to 2006. This implies that (a) estimates of mean effect sizes may be too low and (b) the wrong statistical model may be selected when attempting to explain between-studies variation in results. If a fixed effects model is mistakenly chosen over a random effects model, confidence intervals based on average effect sizes tend to be too narrow, which can make those results look more precise than they really are. Underestimating mean effect sizes while simultaneously overstating their precision is a potentially serious error.”

“[D]emonstration of an effect’s significance — whether theoretical, practical, or clinical — calls for more discipline-specific expertise than the estimation of its magnitude”.

“Some outcomes are categorical instead of continuous. The levels of a categorical outcome are mutually exclusive, and each case is classified into just one level. […] The risk difference (RD) is defined as pC − pT, and it estimates the parameter πC − πT. [Those ‘n-resembling letters’ is how wordpress displays pi; this is one of an almost infinite number of reasons why I detest blogging equations on this blog and usually do not do this – US] […] The risk ratio (RR) is the ratio of the risk rates […] which rate appears in the numerator versus the denominator is arbitrary, so one should always explain how RR is computed. […] The odds ratio (OR) is the ratio of the within-groups odds for the undesirable event. […] A convenient property of OR is that it can be converted to a kind of standardized mean difference known as logit d (Chinn, 2000). […] Reporting logit d may be of interest when the hypothetical variable that underlies the observed dichotomy is continuous.”
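
A short worked example of the effect sizes just defined, using an invented 2×2 table (the counts are mine, not the book’s). Logit d is computed with Chinn’s (2000) conversion, d = ln(OR) × √3/π.

```python
# Comparative risk effect sizes from an invented 2x2 table (counts are illustrative).
import math

events_T, no_events_T = 15, 85   # treatment group: 15 events out of 100
events_C, no_events_C = 30, 70   # control group:   30 events out of 100

pT = events_T / (events_T + no_events_T)   # risk in the treatment group
pC = events_C / (events_C + no_events_C)   # risk in the control group

RD = pC - pT                                               # risk difference
RR = pC / pT                                               # risk ratio (control rate in the numerator)
OR = (events_C / no_events_C) / (events_T / no_events_T)   # odds ratio
logit_d = math.log(OR) * math.sqrt(3) / math.pi            # Chinn (2000) conversion to a d-type effect size

print(f"RD = {RD:.2f}, RR = {RR:.2f}, OR = {OR:.2f}, logit d = {logit_d:.2f}")
```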

“The risk difference RD is easy to interpret but has a drawback: Its range depends on the values of the population proportions πC and πT. That is, the range of RD is greater when both πC and πT are closer to .50 than when they are closer to either 0 or 1.00. The implication is that RD values may not be comparable across different studies when the corresponding parameters πC and πT are quite different. The risk ratio RR is also easy to interpret. It has the shortcoming that only the finite interval from 0 to < 1.0 indicates lower risk in the group represented in the numerator, but the interval from > 1.00 to infinity is theoretically available for describing higher risk in the same group. The range of RR varies according to its denominator. This property limits the value of RR for comparing results across different studies. […] The odds ratio OR shares the limitation that the finite interval from 0 to < 1.0 indicates lower risk in the group represented in the numerator, but the interval from > 1.0 to infinity describes higher risk for the same group. Analyzing natural log transformations of OR and then taking antilogs of the results deals with this problem, just as for RR. The odds ratio may be the least intuitive of the comparative risk effect sizes, but it probably has the best overall statistical properties. This is because OR can be estimated in prospective studies, in studies that randomly sample from exposed and unexposed populations, and in retrospective studies where groups are first formed based on the presence or absence of a disease before their exposure to a putative risk factor is determined […]. Other effect sizes may not be valid in retrospective studies (RR) or in studies without random sampling ([Pearson correlations between dichotomous variables, US]).”

“Sensitivity and specificity are determined by the threshold on a screening test. This means that different thresholds on the same test will generate different sets of sensitivity and specificity values in the same sample. But both sensitivity and specificity are independent of population base rate and sample size. […] Sensitivity and specificity affect predictive value, the proportion of test results that are correct […] In general, predictive values increase as sensitivity and specificity increase. […] Predictive value is also influenced by the base rate (BR), the proportion of all cases with the disorder […] In general, PPV [positive predictive value] decreases and NPV [negative…] increases as BR approaches zero. This means that screening tests tend to be more useful for ruling out rare disorders than correctly predicting their presence. It also means that most positive results may be false positives under low base rate conditions. This is why it is difficult for researchers or social policy makers to screen large populations for rare conditions without many false positives. […] The effect of BR on predictive values is striking but often overlooked, even by professionals […]. One misunderstanding involves confusing sensitivity and specificity, which are invariant to BR, with PPV and NPV, which are not. This means that diagnosticians fail to adjust their estimates of test accuracy for changes in base rates, which exemplifies the base rate fallacy. […] In general, test results have greater impact on changing the pretest odds when the base rate is moderate, neither extremely low (close to 0) nor extremely high (close to 1.0). But if the target disorder is either very rare or very common, only a result from a highly accurate screening test will change things much.”
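
The effect of the base rate on predictive values described above is easy to demonstrate numerically. In the sketch below (my own numbers), sensitivity and specificity are held fixed at 95% while only the base rate varies; PPV collapses as the condition becomes rare, which is the base rate fallacy in action.

```python
# How predictive values depend on the base rate, with sensitivity and specificity held fixed.
def predictive_values(sensitivity, specificity, base_rate):
    tp = sensitivity * base_rate              # true positives (per person screened)
    fn = (1 - sensitivity) * base_rate        # false negatives
    fp = (1 - specificity) * (1 - base_rate)  # false positives
    tn = specificity * (1 - base_rate)        # true negatives
    return tp / (tp + fp), tn / (tn + fn)     # (PPV, NPV)

for br in (0.50, 0.10, 0.01, 0.001):
    ppv, npv = predictive_values(0.95, 0.95, br)
    print(f"base rate {br:>6.3f}: PPV = {ppv:.3f}, NPV = {npv:.5f}")
# At a 0.1% base rate even this accurate test has PPV < 0.02: most positives
# are false positives, which is the base rate fallacy described above.
```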

“The technique of ANCOVA [ANalysis of COVAriance, US] has two more assumptions than ANOVA does. One is homogeneity of regression, which requires equal within-populations unstandardized regression coefficients for predicting outcome from the covariate. In nonexperimental designs where groups differ systematically on the covariate […] the homogeneity of regression assumption is rather likely to be violated. The second assumption is that the covariate is measured without error […] Violation of either assumption may lead to inaccurate results. For example, an unreliable covariate in experimental designs causes loss of statistical power and in nonexperimental designs may also cause inaccurate adjustment of the means […]. In nonexperimental designs where groups differ systematically, these two extra assumptions are especially likely to be violated. An alternative to ANCOVA is propensity score analysis (PSA). It involves the use of logistic regression to estimate the probability for each case of belonging to different groups, such as treatment versus control, in designs without randomization, given the covariate(s). These probabilities are the propensities, and they can be used to match cases from nonequivalent groups.”
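
A minimal sketch of the propensity-score idea mentioned above, assuming a single covariate and using logistic regression in scikit-learn. The data and coefficients are synthetic inventions of mine; a real PSA involves multiple covariates, overlap diagnostics, and a matching or weighting step that is only hinted at here.

```python
# Propensity scores: the probability of treatment given covariates, estimated
# here with logistic regression on synthetic data (covariate effects invented).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 1000
covariate = rng.normal(0, 1, n)                 # e.g. baseline severity
# Treatment assignment depends on the covariate, so the groups are non-equivalent.
p_treat = 1 / (1 + np.exp(-0.8 * covariate))
treated = rng.binomial(1, p_treat)

# Estimate each case's propensity to be treated from the covariate(s).
model = LogisticRegression().fit(covariate.reshape(-1, 1), treated)
propensity = model.predict_proba(covariate.reshape(-1, 1))[:, 1]

# Cases with similar propensities can then be matched across treated/control groups;
# here we simply print the estimated scores for the first few cases.
print(np.round(propensity[:5], 3))
```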

August 5, 2017 Posted by | Books, Epidemiology, Papers, Statistics

A New Classification System for Diabetes: Rationale and Implications of the β-Cell–Centric Classification Schema

When I started writing this post I intended to write a standard diabetes post covering a variety of different papers, but while I was covering one of the papers I intended to include in the post I realized that I felt like I had to cover that paper in a lot of detail, and I figured I might as well make a separate post about it. Here’s a link to the paper: The Time Is Right for a New Classification System for Diabetes: Rationale and Implications of the β-Cell–Centric Classification Schema.

I have frequently discussed the problem of how best to think about and categorize the various disorders of glucose homeostasis which are currently lumped together into the various discrete diabetes categories, both online and offline; see e.g. the last few paragraphs of this recent post. I have frequently noted in such contexts that simplistic and very large ‘boxes’ like ‘type 1’ and ‘type 2’ leave out a lot of details, and that some of the details that are lost by employing such a categorization scheme might well be treatment-relevant in some contexts. Individualized medicine is however expensive, so I still consider it an open question to what extent valuable information – which is to say, information that could potentially be used cost-effectively in the treatment context – is lost on account of the current diagnostic practices, but information is certainly lost and treatment options potentially neglected. Relatedly, what’s not cost-effective today may well be tomorrow.

As I decided to devote an entire post to this paper, it is of course a must-read if you’re interested in these topics. I have quoted extensively from the paper below:

“The current classification system presents challenges to the diagnosis and treatment of patients with diabetes mellitus (DM), in part due to its conflicting and confounding definitions of type 1 DM, type 2 DM, and latent autoimmune diabetes of adults (LADA). The current schema also lacks a foundation that readily incorporates advances in our understanding of the disease and its treatment. For appropriate and coherent therapy, we propose an alternate classification system. The β-cell–centric classification of DM is a new approach that obviates the inherent and unintended confusions of the current system. The β-cell–centric model presupposes that all DM originates from a final common denominator — the abnormal pancreatic β-cell. It recognizes that interactions between genetically predisposed β-cells with a number of factors, including insulin resistance (IR), susceptibility to environmental influences, and immune dysregulation/inflammation, lead to the range of hyperglycemic phenotypes within the spectrum of DM. Individually or in concert, and often self-perpetuating, these factors contribute to β-cell stress, dysfunction, or loss through at least 11 distinct pathways. Available, yet underutilized, treatments provide rational choices for personalized therapies that target the individual mediating pathways of hyperglycemia at work in any given patient, without the risk of drug-related hypoglycemia or weight gain or imposing further burden on the β-cells.”

“The essential function of a classification system is as a navigation tool that helps direct research, evaluate outcomes, establish guidelines for best practices for prevention and care, and educate on all of the above. Diabetes mellitus (DM) subtypes as currently categorized, however, do not fit into our contemporary understanding of the phenotypes of diabetes (16). The inherent challenges of the current system, together with the limited knowledge that existed at the time of the crafting of the current system, yielded definitions for type 1 DM, type 2 DM, and latent autoimmune diabetes in adults (LADA) that are not distinct and are ambiguous and imprecise.”

“Discovery of the role played by autoimmunity in the pathogenesis of type 1 DM created the assumption that type 1 DM and type 2 DM possess unique etiologies, disease courses, and, consequently, treatment approaches. There exists, however, overlap among even the most “typical” patient cases. Patients presenting with otherwise classic insulin resistance (IR)-associated type 2 DM may display hallmarks of type 1 DM. Similarly, obesity-related IR may be observed in patients presenting with “textbook” type 1 DM (7). The late presentation of type 1 DM provides a particular challenge for the current classification system, in which this subtype of DM is generally termed LADA. Leading diabetes organizations have not arrived at a common definition for LADA (5). There has been little consensus as to whether this phenotype constitutes a form of type 2 DM with early or fast destruction of β-cells, a late manifestation of type 1 DM (8), or a distinct entity with its own genetic footprint (5). Indeed, current parameters are inadequate to clearly distinguish any of the subforms of DM (Fig. 1).

[Fig. 1 from the paper: https://i0.wp.com/care.diabetesjournals.org/content/diacare/39/2/179/F1.medium.gif]

The use of IR to define type 2 DM similarly needs consideration. The fact that many obese patients with IR do not develop DM indicates that IR is insufficient to cause type 2 DM without predisposing factors that affect β-cell function (9).”

“The current classification schema imposes unintended constraints on individualized medicine. Patients diagnosed with LADA who retain endogenous insulin production may receive “default” insulin therapy as treatment of choice. This decision is guided largely by the categorization of LADA within type 1 DM, despite the capacity for endogenous insulin production. Treatment options that do not pose the risks of hypoglycemia or weight gain might be both useful and preferable for LADA but are typically not considered beyond use in type 2 DM (10). […] We believe that there is little rationale for limiting choice of therapy solely on the current definitions of type 1 DM, type 2 DM, and LADA. We propose that choice of therapy should be based on the particular mediating pathway(s) of hyperglycemia present in each individual patient […] the issue is not “what is LADA” or any clinical presentation of DM under the current system. The issue is the mechanisms and rate of destruction of β-cells at work in all DM. We present a model that provides a more logical approach to classifying DM: the β-cell–centric classification of DM. In this schema, the abnormal β-cell is recognized as the primary defect in DM. The β-cell–centric classification system recognizes the interplay of genetics, IR, environmental factors, and inflammation/immune system on the function and mass of β-cells […]. Importantly, this model is universal for the characterization of DM. The β-cell–centric concept can be applied to DM arising in genetically predisposed β-cells, as well as in strongly genetic IR syndromes, such as the Rabson-Mendenhall syndrome (28), which may exhaust nongenetically predisposed β-cells. Finally, the β-cell–centric classification of all DM supports best practices in the management of DM by identifying mediating pathways of hyperglycemia that are operative in each patient and directing treatment to those specific dysfunctions.”

“A key premise is that the mediating pathways of hyperglycemia are common across prediabetes, type 1 DM, type 2 DM, and other currently defined forms of DM. Accordingly, we believe that the current antidiabetes armamentarium has broader applicability across the spectrum of DM than is currently utilized.

The ideal treatment paradigm would be one that uses the least number of agents possible to target the greatest number of mediating pathways of hyperglycemia operative in the given patient. It is prudent to use agents that will help patients reach target A1C levels without introducing drug-related hypoglycemia or weight gain. Despite the capacity of insulin therapy to manage glucotoxicity, there is a concern for β-cell damage due to IR that has been exacerbated by exogenous insulin-induced hyperinsulinemia and weight gain (41).”

“We propose that the β-cell–centric model is a conceptual framework that could help optimize processes of care for DM. A1C, fasting blood glucose, and postprandial glucose testing remain the basis of DM diagnosis and monitoring. Precision medicine in the treatment of DM could be realized by additional diagnostic testing that could include C-peptide (1), islet cell antibodies or other markers of inflammation (1,65), measures of IR, improved assays for β-cell mass, and markers of environmental damage and by the development of markers for the various mediating pathways of hyperglycemia.

“We uphold that there is, and will increasingly be, a place for genotyping in DM standard of care. Pharmacogenomics could help direct patient-level care (66–69) and holds the potential to spur on research through the development of DM gene banks for analyzing genetic distinctions between type 1 DM, LADA, type 2 DM, and maturity-onset diabetes of the young. The cost for genotyping has become increasingly affordable.”

“The ideal treatment regimens should not be potentially detrimental to the long-term integrity of the β-cells. Specifically, sulfonylureas and glinides should be ardently avoided. Any benefits associated with sulfonylureas and glinides (including low cost) are not enduring and are far outweighed by their attendant risks (and associated treatment costs) of hypoglycemia and weight gain, high rate of treatment failure and subsequent enhanced requirements for antihyperglycemic management, potential for β-cell exhaustion (42), increased risk of cardiovascular events (74), and potential for increased risk of mortality (75,76). Fortunately, there are a large number of classes now available that do not pose these risks.”

“Newer agents present alternatives to insulin therapy, including in patients with “advanced” type 2 DM with residual insulin production. Insulin therapy induces hypoglycemia, weight gain, and a range of adverse consequences of hyperinsulinemia with both short- and long-term outcomes (77–85). Newer antidiabetes classes may be used to delay insulin therapy in candidate patients with endogenous insulin production (19). […] When insulin therapy is needed, we suggest it be incorporated as add-on therapy rather than as substitution for noninsulin antidiabetes agents. Outcomes research is needed to fully evaluate various combination therapeutic approaches, as well as the potential of newer agents to address drivers of β-cell dysfunction and loss.

The principles of the β-cell–centric model provide a rationale for adjunctive therapy with noninsulin regimens in patients with type 1 DM (7,12–16). Thiazolidinedione (TZD) therapy in patients with type 1 DM presenting with IR, for example, is appropriate and can be beneficial (17). Clinical trials in type 1 DM show that incretins (20) or SGLT-2 inhibitors (25,88) as adjunctive therapy to exogenous insulin appear to reduce plasma glucose variability.”

July 24, 2017 Posted by | Diabetes, Medicine, Papers

A few diabetes papers of interest

i. Cost-Effectiveness of Prevention and Treatment of the Diabetic Foot.

“A risk-based Markov model was developed to simulate the onset and progression of diabetic foot disease in patients with newly diagnosed type 2 diabetes managed with care according to guidelines for their lifetime. Mean survival time, quality of life, foot complications, and costs were the outcome measures assessed. Current care was the reference comparison. Data from Dutch studies on the epidemiology of diabetic foot disease, health care use, and costs, complemented with information from international studies, were used to feed the model.

RESULTS—Compared with current care, guideline-based care resulted in improved life expectancy, gain of quality-adjusted life-years (QALYs), and reduced incidence of foot complications. The lifetime costs of management of the diabetic foot following guideline-based care resulted in a cost per QALY gained of <$25,000, even for levels of preventive foot care as low as 10%. The cost-effectiveness varied sharply, depending on the level of foot ulcer reduction attained.

CONCLUSIONS—Management of the diabetic foot according to guideline-based care improves survival, reduces diabetic foot complications, and is cost-effective and even cost saving compared with standard care.”

I won’t go too deeply into the model setup and the results, but some of the data they used to feed the model were actually somewhat interesting in their own right, and I have added some of these data below, along with some of the model results.

“It is estimated that 80% of LEAs [lower extremity amputations] are preceded by foot ulcers. Accordingly, it has been demonstrated that preventing the development of foot ulcers in patients with diabetes reduces the frequency of LEAs by 49–85% (6).”

“An annual ulcer incidence rate of 2.1% and an amputation incidence rate of 0.6% were among the reference country-specific parameters derived from this study and adopted in the model.”

“The health outcomes results of the cohort following standard care were comparable to figures reported for diabetic patients in the Netherlands. […] In the 10,000 patients followed until death, a total of 1,780 ulcer episodes occurred, corresponding to a cumulative ulcer incidence of 17.8% and an annual ulcer incidence of 2.2% (mean annual ulcer incidence for the Netherlands is 2.1%) (17). The number of amputations observed was 362 (250 major and 112 minor), corresponding to a cumulative incidence of 3.6% and an annual incidence of 0.4% (mean annual amputation incidence reported for the Netherlands is 0.6%) (17).”

“Cornerstones of guidelines-based care are intensive glycemic control (IGC) and optimal foot care (OFC). Although health benefits and economic efficiency of intensive blood glucose control (8) and foot care programs (9–14) have been individually reported, the health and economic outcomes and the cost-effectiveness of both interventions have not been determined. […] OFC according to guidelines includes professional protective foot care, education of patients and staff, regular inspection of the feet, identification of the high-risk patient, treatment of nonulcerative lesions, and a multidisciplinary approach to established foot ulcers. […] All cohorts of patients simulated for the different scenarios of guidelines care resulted in improved life expectancy, QALYs gained, and reduced incidence of foot ulcers and LEA compared with standard care. The largest effects on these outcomes were obtained when patients received IGC + OFC. When comparing the independent health effects of the two guidelines strategies, OFC resulted in a greater reduction in ulcer and amputation rates than IGC. Moreover, patients who received IGC + OFC showed approximately the same LEA incidence as patients who received OFC alone. The LEA decrease obtained was proportional to the level of foot ulcer reduction attained.”

“The mean total lifetime costs of a patient under either of the three guidelines care scenarios ranged from $4,088 to $4,386. For patients receiving IGC + OFC, these costs resulted in <$25,000 per QALY gained (relative to standard care). For patients receiving IGC alone, the ICER [here’s a relevant link – US] obtained was $32,057 per QALY gained, and for those receiving OFC alone, this ICER ranged from $12,169 to $220,100 per QALY gained, depending on the level of ulcer reduction attained. […] Increasing the effectiveness of preventive foot care in patients under OFC and IGC + OFC resulted in more QALYs gained, lower costs, and a more favorable ICER. The results of the simulations for the combined scenario (IGC + OFC) were rather insensitive to changes in utility weights and costing parameters. Similar results were obtained for parameter variations in the other two scenarios (IGC and OFC separately).”
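The ICER itself is just a ratio of cost and effect differences between two strategies. A minimal sketch with made-up numbers (not the paper’s actual inputs):

    def icer(cost_new, cost_old, qalys_new, qalys_old):
        """Incremental cost-effectiveness ratio: extra cost per extra QALY gained."""
        return (cost_new - cost_old) / (qalys_new - qalys_old)

    # Hypothetical example: the new strategy costs $900 more per patient and
    # yields 0.06 additional QALYs, i.e. roughly $15,000 per QALY gained.
    print(round(icer(cost_new=4300, cost_old=3400, qalys_new=10.56, qalys_old=10.50)))

A willingness-to-pay threshold such as the <$25,000 per QALY figure used above is then simply a cutoff applied to this ratio.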

“The results of this study suggest that IGC + OFC reduces foot ulcers and amputations and leads to an improvement in life expectancy. Greater health benefits are obtained with higher levels of foot ulcer prevention. Although care according to guidelines increases health costs, the cost per QALY gained is <$25,000, even for levels of preventive foot care as low as 10%. ICERs of this order are cost-effective according to the stratification of interventions for diabetes recently proposed (32). […] IGC falls into the category of a possibly cost-effective intervention in the management of the diabetic foot. Although it does not produce significant reduction in foot ulcers and LEA, its effectiveness resides in the slowing of neuropathy progression rates.

Extrapolating our results to a practical situation, if IGC + OFC was to be given to all diabetic patients in the Netherlands, with the aim of reducing LEA by 50% (St. Vincent’s declaration), the cost per QALY gained would be $12,165 and the cost for managing diabetic ulcers and amputations would decrease by 53 and 58%, respectively. From a policy perspective, this is clearly cost-effective and cost saving compared with current care.”

ii. Early Glycemic Control, Age at Onset, and Development of Microvascular Complications in Childhood-Onset Type 1 Diabetes.

“The aim of this work was to study the impact of glycemic control (HbA1c) early in disease and age at onset on the occurrence of incipient diabetic nephropathy (MA) and background retinopathy (RP) in childhood-onset type 1 diabetes.

RESEARCH DESIGN AND METHODS—All children, diagnosed at 0–14 years in a geographically defined area in northern Sweden between 1981 and 1992, were identified using the Swedish Childhood Diabetes Registry. From 1981, a nationwide childhood diabetes care program was implemented recommending intensified insulin treatment. HbA1c and urinary albumin excretion were analyzed, and fundus photography was performed regularly. Retrospective data on all 94 patients were retrieved from medical records and laboratory reports.

RESULTS—During the follow-up period, with a mean duration of 12 ± 4 years (range 5–19), 17 patients (18%) developed MA, 45 patients (48%) developed RP, and 52% had either or both complications. A Cox proportional hazard regression, modeling duration to occurrence of MA or RP, showed that glycemic control (reflected by mean HbA1c) during the follow-up was significantly associated with both MA and RP when adjusted for sex, birth weight, age at onset, and tobacco use as potential confounders. Mean HbA1c during the first 5 years of diabetes was a near-significant determinant for development of MA (hazard ratio 1.41, P = 0.083) and a significant determinant of RP (1.32, P = 0.036). The age at onset of diabetes significantly influenced the risk of developing RP (1.11, P = 0.021). Thus, in a Kaplan-Meier analysis, onset of diabetes before the age of 5 years, compared with the age-groups 5–11 and >11 years, showed a longer time to occurrence of RP (P = 0.015), but no clear tendency was seen for MA, perhaps due to lower statistical power.

CONCLUSIONS—Despite modern insulin treatment, >50% of patients with childhood-onset type 1 diabetes developed detectable diabetes complications after ∼12 years of diabetes. Inadequate glycemic control, also during the first 5 years of diabetes, seems to accelerate time to occurrence, whereas a young age at onset of diabetes seems to prolong the time to development of microvascular complications. […] The present study and other studies (15,54) indicate that children with an onset of diabetes before the age of 5 years may have a prolonged time to development of microvascular complications. Thus, the youngest age-groups, who are most sensitive to hypoglycemia with regard to risk of persistent brain damage, may have a relative protection during childhood or a longer time to development of complications.”
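As a methods aside, the sort of time-to-event analysis used here (Cox regression for the adjusted hazard ratios, Kaplan-Meier curves for the age-at-onset comparison) is straightforward to set up with standard software. A rough sketch in Python using the lifelines package and simulated data – not the Swedish registry data, obviously:

    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter, KaplanMeierFitter

    rng = np.random.default_rng(0)
    n = 94
    df = pd.DataFrame({
        "mean_hba1c": rng.normal(8.5, 1.2, n),   # simulated mean HbA1c (%)
        "onset_age": rng.integers(0, 15, n),     # simulated age at diabetes onset
    })
    # Simulate years until retinopathy with a hazard that rises with HbA1c,
    # censoring everyone still event-free at 19 years of follow-up.
    hazard = 0.03 * np.exp(0.3 * (df["mean_hba1c"] - 8.5))
    time_to_rp = rng.exponential(1 / hazard.to_numpy())
    df["years"] = np.minimum(time_to_rp, 19)
    df["rp"] = (time_to_rp <= 19).astype(int)

    cph = CoxPHFitter()
    cph.fit(df, duration_col="years", event_col="rp")
    cph.print_summary()                          # exp(coef) is the hazard ratio per unit

    kmf = KaplanMeierFitter()
    early_onset = df["onset_age"] < 5
    kmf.fit(df.loc[early_onset, "years"], df.loc[early_onset, "rp"], label="onset < 5 years")
    print(kmf.median_survival_time_)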

It’s important to note that although some people reading the study may think this is all ancient history (people diagnosed in the ’80s?), to a lot of people it really isn’t. The study is of great personal interest to me, as I was diagnosed in ’87; if it had been a Danish study rather than a Swedish one, I might well have been included in the analysis.

Another note to add in the context of the above coverage: unlike what the authors of the paper seem to think/imply, hypoglycemia may not be the only relevant variable of interest when it comes to the effect of childhood diabetes on brain development, where early diagnosis has been observed to tend to lead to less favourable outcomes. Other variables which may be important include DKA episodes and perhaps also chronic hyperglycemia during early childhood. See this post for more stuff on these topics.

Some more stuff from the paper:

“The annual incidence of type 1 diabetes in northern Sweden in children 0–14 years of age is now ∼31/100,000. During the time period 1981–1992, there has been an increase in the annual incidence from 19 to 31/100,000 in northern Sweden. This is similar to the rest of Sweden […]. Seventeen (18%) of the 94 patients fulfilled the criteria for MA during the follow-up period. None of the patients developed overt nephropathy, elevated serum creatinine, or had signs of any other kidney disorder, e.g., hematuria, during the follow-up period. […] The mean time to diagnosis of MA was 9 ± 3 years (range 4–15) from diabetes onset. Forty-five (48%) of the 94 patients fulfilled the criteria for RP during the follow-up period. None of the patients developed proliferative retinopathy or were treated with photocoagulation. The mean time to diagnosis of RP was 11 ± 4 years (range 4–19) from onset of diabetes. Of the 45 patients with RP, 13 (29%) had concomitant MA, and thus 13 (76.5%) of the 17 patients with MA had concomitant RP. […] Altogether, among the 94 patients, 32 (34%) had isolated RP, 4 (4%) had isolated MA, and 13 (14%) had combined RP and MA. Thus, 49 (52%) patients had either one or both complications and, hence, 45 (48%) had neither of these complications.”

“When modeling MA as a function of glycemic level up to the onset of MA or during the entire follow-up period, adjusting for sex, birth weight, age at onset of diabetes, and tobacco use, only glycemic control had a significant effect. An increase in hazard ratio (HR) of 83% per one percentage unit increase in mean HbA1c was seen. […] The increase in HR of developing RP for each percentage unit rise in HbA1c during the entire follow-up period was 43% and in the early period 32%. […] Age at onset of diabetes was a weak but significant independent determinant for the development of RP in all regression models (P = 0.015, P = 0.018, and P = 0.010, respectively). […] Despite that this study was relatively small and had a retrospective design, we were able to show that the glycemic level already during the first 5 years may be an important predictor of later development of both MA and RP. This is in accordance with previous prospective follow-up studies (16,30).”
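To make the interpretation of those numbers concrete: a proportional hazards model is multiplicative, so per-unit hazard ratios compound when comparing patients further apart. A small illustration (the 1.83 figure is the paper’s estimate for MA; the rest is just arithmetic):

    import numpy as np

    hr_per_unit = 1.83           # reported HR per 1 percentage-point higher mean HbA1c
    beta = np.log(hr_per_unit)   # the underlying Cox regression coefficient

    print(hr_per_unit ** 2)      # ~3.35: implied hazard ratio for a 2-point difference
    print(np.exp(0.5 * beta))    # ~1.35: implied hazard ratio for a 0.5-point difference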

“Previously, male sex, smoking, and low birth weight have been shown to be risk factors for the development of nephropathy and retinopathy (6,45–49). However, in this rather small retrospective study with a limited follow-up time, we could not confirm these associations”. This may just be because of lack of statistical power; it’s a relatively small study. Again, this is/was of personal interest to me; two of those three risk factors apply to me, and neither of them is modifiable.

iii. Eighteen Years of Fair Glycemic Control Preserves Cardiac Autonomic Function in Type 1 Diabetes.

“Reduced cardiovascular autonomic function is associated with increased mortality in both type 1 and type 2 diabetes (1–4). Poor glycemic control plays an important role in the development and progression of diabetic cardiac autonomic dysfunction (5–7). […] Diabetic cardiovascular autonomic neuropathy (CAN) can be defined as impaired function of the peripheral autonomic nervous system. Exercise intolerance, resting tachycardia, and silent myocardial ischemia may be early signs of cardiac autonomic dysfunction (9). The most frequent finding in subclinical and symptomatic CAN is reduced heart rate variability (HRV) (10). […] No other studies have followed type 1 diabetic patients on intensive insulin treatment during ≥14-year periods and documented cardiac autonomic dysfunction. We evaluated the association between 18 years’ mean HbA1c and cardiac autonomic function in a group of type 1 diabetic patients with 30 years of disease duration.”

“A total of 39 patients with type 1 diabetes were followed during 18 years, and HbA1c was measured yearly. At 18 years follow-up heart rate variability (HRV) measurements were used to assess cardiac autonomic function. Standard cardiac autonomic tests during normal breathing, deep breathing, the Valsalva maneuver, and the tilt test were performed. Maximal heart rate increase during exercise electrocardiogram and minimal heart rate during sleep were also used to describe cardiac autonomic function.

RESULTS—We present the results for patients with mean HbA1c <8.4% (two lowest HbA1c tertiles) compared with those with HbA1c ≥8.4% (highest HbA1c tertile). All of the cardiac autonomic tests were significantly different in the high- and the low-HbA1c groups, and the most favorable scores for all tests were seen in the low-HbA1c group. In the low-HbA1c group, the HRV was 40% during deep breathing, and in the high-HbA1c group, the HRV was 19.9% (P = 0.005). Minimal heart rate at night was significantly lower in the low-HbA1c groups than in the high-HbA1c group (P = 0.039). With maximal exercise, the increase in heart rate was significantly higher in the low-HbA1c group compared with the high-HbA1c group (P = 0.001).

CONCLUSIONS—Mean HbA1c during 18 years was associated with cardiac autonomic function. Cardiac autonomic function was preserved with HbA1c <8.4%, whereas cardiac autonomic dysfunction was impaired in the group with HbA1c ≥8.4%. […] The study underlines the importance of good glycemic control and demonstrates that good long-term glycemic control is associated with preserved cardiac autonomic function, whereas a lack of good glycemic control is associated with cardiac autonomic dysfunction.”

These results are from Norway (Oslo), and again they seem relevant to me personally (‘from a statistical point of view’) – I’ve had diabetes for about as long as the people they included in the study.

iv. The Mental Health Comorbidities of Diabetes.

“Individuals living with type 1 or type 2 diabetes are at increased risk for depression, anxiety, and eating disorder diagnoses. Mental health comorbidities of diabetes compromise adherence to treatment and thus increase the risk for serious short- and long-term complications […] Young adults with type 1 diabetes are especially at risk for poor physical and mental health outcomes and premature mortality. […] we summarize the prevalence and consequences of mental health problems for patients with type 1 or type 2 diabetes and suggest strategies for identifying and treating patients with diabetes and mental health comorbidities.”

“Major advances in the past 2 decades have improved understanding of the biological basis for the relationship between depression and diabetes.2 A bidirectional relationship might exist between type 2 diabetes and depression: just as type 2 diabetes increases the risk for onset of major depression, a major depressive disorder signals increased risk for onset of type 2 diabetes.2 Moreover, diabetes distress is now recognized as an entity separate from major depressive disorder.2 Diabetes distress occurs because virtually all of diabetes care involves self-management behavior—requiring balance of a complex set of behavioral tasks by the person and family, 24 hours a day, without “vacation” days. […] Living with diabetes is associated with a broad range of diabetes-related distresses, such as feeling overwhelmed with the diabetes regimen; being concerned about the future and the possibility of serious complications; and feeling guilty when management is going poorly. This disease burden and emotional distress in individuals with type 1 or type 2 diabetes, even at levels of severity below the threshold for a psychiatric diagnosis of depression or anxiety, are associated with poor adherence to treatment, poor glycemic control, higher rates of diabetes complications, and impaired quality of life. […] Depression in the context of diabetes is […] associated with poor self-care with respect to diabetes treatment […] Depression among individuals with diabetes is also associated with increased health care use and expenditures, irrespective of age, sex, race/ethnicity, and health insurance status.3”

“Women with type 1 diabetes have a 2-fold increased risk for developing an eating disorder and a 1.9-fold increased risk for developing subthreshold eating disorders than women without diabetes.6 Less is known about eating disorders in boys and men with diabetes. Disturbed eating behaviors in women with type 1 diabetes include binge eating and caloric purging through insulin restriction, with rates of these disturbed eating behaviors reported to occur in 31% to 40% of women with type 1 diabetes aged between 15 and 30 years.6 […] disordered eating behaviors persist and worsen over time. Women with type 1 diabetes and eating disorders have poorer glycemic control, with higher rates of hospitalizations and retinopathy, neuropathy, and premature death compared with similarly aged women with type 1 diabetes without eating disorders.6 […] few diabetes clinics provide mental health screening or integrate mental/behavioral health services in diabetes clinical care.4 It is neither practical nor affordable to use standardized psychiatric diagnostic interviews to diagnose mental health comorbidities in individuals with diabetes. Brief paper-and-pencil self-report measures such as the Beck Depression Inventory […] that screen for depressive symptoms are practical in diabetes clinical settings, but their use remains rare.”

The paper does not mention this, but it is important to note that there are multiple plausible biological pathways which might help to explain the bidirectional linkage between depression and type 2 diabetes. Physiological ‘stress’ (think: inflammation) is likely to be an important factor, and so are the typical physiological responses to some of the pharmacological treatments used to treat depression (…as well as other mental health conditions); multiple drugs used in psychiatry, including tricyclic antidepressants, cause weight gain and have proven diabetogenic effects – I’ve covered these topics before here on the blog. I’ve incidentally also covered other topics touched briefly upon in the paper – here’s for example a more comprehensive post about screening for depression in the diabetes context, and here’s a post with some information about how one might go about screening for eating disorders; skin signs are important. I was a bit annoyed that the author of the above paper did not mention this, as checking whether Russell’s sign – a very reliable indicator of an eating disorder – is present is easier/cheaper/faster than performing any kind of even semi-valid depression screen.

v. Diabetes, Depression, and Quality of Life. This last one covers topics related to those discussed in the paper above.

“The study consisted of a representative population sample of individuals aged ≥15 years living in South Australia comprising 3,010 personal interviews conducted by trained health interviewers. The prevalence of depression in those suffering doctor-diagnosed diabetes and comparative effects of diabetic status and depression on quality-of-life dimensions were measured.

RESULTS—The prevalence of depression in the diabetic population was 24% compared with 17% in the nondiabetic population. Those with diabetes and depression experienced an impact with a large effect size on every dimension of the Short Form Health-Related Quality-of-Life Questionnaire (SF-36) as compared with those who suffered diabetes and who were not depressed. A supplementary analysis comparing both depressed diabetic and depressed nondiabetic groups showed there were statistically significant differences in the quality-of-life effects between the two depressed populations in the physical and mental component summaries of the SF-36.

CONCLUSIONS—Depression for those with diabetes is an important comorbidity that requires careful management because of its severe impact on quality of life.”

I felt slightly curious about the setup after having read this, because representative population samples of individuals should not, in my opinion, yield depression rates of either 17% or 24%. Rates that high suggest to me that the depression criteria used in the paper are a bit ‘laxer’/more inclusive than what you see in some other contexts when reading this sort of literature – to give an example of what I mean, the depression screening post I link to above noted that clinical or major depression occurred in 11.4% of people with diabetes, compared to a non-diabetic prevalence of 5%. There’s a long way from 11% to 24% and from 5% to 17%. Another potential explanation for such a high depression rate could of course also be some sort of selection bias at the data acquisition stage, but that’s obviously not the case here. However, 3,000 interviews is a lot of interviews, so let’s read on…

“Several studies have assessed the impact of depression in diabetes in terms of the individual’s functional ability or quality of life (3,4,13). Brown et al. (13) examined preference-based time tradeoff utility values associated with diabetes and showed that those with diabetes were willing to trade a significant proportion of their remaining life in return for a diabetes-free health state.”

“Depression was assessed using the mood module of the Primary Care Evaluation of Mental Disorders questionnaire. This has been validated to provide estimates of mental disorder comparable with those found using structured and longer diagnostic interview schedules (16). The mental disorders examined in the questionnaire included major depressive disorder, dysthymia, minor depressive disorder, and bipolar disorder. [So yes, the depression criteria used in this study are definitely more inclusive than depression criteria including only people with MDD] […] The Short Form Health-Related Quality-of-Life Questionnaire (SF-36) was also included to assess the quality of life of the different population groups with and without diabetes. […] Five groups were examined: the overall population without diabetes and without depression; the overall diabetic population; the depression-only population; the diabetic population without depression; and the diabetic population with depression.”

“Of the population sample, 205 (6.8%) were classified as having major depression, 130 (4.3%) had minor depression, 105 (3.5%) had partial remission of major depression, 79 (2.6%) had dysthymia, and 5 (0.2%) had bipolar disorder (depressed phase). No depressive syndrome was detected in 2,486 (82.6%) respondents. The population point prevalence of doctor-diagnosed diabetes in this survey was 5.2% (95% CI 4.6–6.0). The prevalence of depression in the diabetic population was 23.6% (22.1–25.1) compared with 17.1% (15.8–18.4) in the nondiabetic population. This difference approached statistical significance (P = 0.06). […] There [was] a clear difference in the quality-of-life scores for the diabetic and depression group when compared with the diabetic group without depression […] Overall, the highest quality-of-life scores are experienced by those without diabetes and depression and the lowest by those with diabetes and depression. […] the standard scores of those with no diabetes have quality-of-life status comparable with the population mean or slightly better. At the other extreme those with diabetes and depression experience the most severe comparative impact on quality-of-life for every dimension. Between these two extremes, diabetes overall and the diabetes without depression groups have a moderate-to-severe impact on the physical functioning, role limitations (physical), and general health scales […] The results of the two-factor ANOVA showed that the interaction term was significant only for the PCS [Physical Component Score – US] scale, indicating a greater than additive effect of diabetes and depression on the physical health dimension.”

“[T]here was a significant interaction between diabetes and depression on the PCS but not on the MCS [Mental Component Score. Do note in this context that the no-interaction result is far from certain, because as they observe: “it may simply be sample size that has not allowed us to observe a greater than additive effect in the MCS scale. Although there was no significant interaction between diabetes and depression and the MCS scale, we did observe increases on the effect size for the mental health dimensions”]. One explanation for this finding might be that depression can influence physical outcomes, such as recovery from myocardial infarction, survival with malignancy, and propensity to infection. Various mechanisms have been proposed for this, including changes to the immune system (24). Other possibilities are that depression in diabetes may affect the capacity to maintain medication vigilance, maintain a good diet, and maintain other lifestyle factors, such as smoking and exercise, all of which are likely possible pathways for a greater than additive effect. Whatever the mechanism involved, these data indicate that the addition of depression to diabetes has a severe impact on quality of life, and this needs to be managed in clinical practice.”
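The ‘greater than additive effect’ they refer to is simply a significant interaction term in the two-factor model. A minimal sketch of how such a test is typically set up, using simulated data and the statsmodels formula interface (the variable names and effect sizes are mine, not the paper’s):

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n = 3010
    df = pd.DataFrame({
        "diabetes": rng.binomial(1, 0.05, n),
        "depression": rng.binomial(1, 0.17, n),
    })
    # Simulate a physical component score where having both conditions hurts
    # more than the sum of the two separate effects (an extra -6 points).
    df["pcs"] = (50 - 5 * df["diabetes"] - 4 * df["depression"]
                 - 6 * df["diabetes"] * df["depression"]
                 + rng.normal(0, 9, n))

    model = smf.ols("pcs ~ diabetes * depression", data=df).fit()
    print(model.summary())  # the diabetes:depression coefficient is the interaction term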

May 25, 2017 Posted by | Cardiology, Diabetes, Health Economics, Medicine, Nephrology, Neurology, Ophthalmology, Papers, Personal, Pharmacology, Psychiatry, Psychology | Leave a comment

Extraordinary Physics with Millisecond Pulsars

A few related links:
Nanograv.org.
Millisecond pulsar.
PSR J0348+0432.
Pulsar timing array.
Detection of Gravitational Waves using Pulsar Timing (paper).
The strong equivalence principle.
European Pulsar Timing Array.
Parkes Observatory.
Gravitational wave.
Gravitational waves from binary supermassive black holes missing in pulsar observations (paper – it’s been a long time since I watched the lecture, but in my bookmarks I noted that some of the stuff included in this publication was covered in the lecture).

May 24, 2017 Posted by | Astronomy, Lectures, Papers, Physics | Leave a comment

Standing on the Shoulders of Mice: Aging T-cells

 

Most of the lecture is not about mice, but rather about stuff like this and this (both papers are covered in the lecture). I’ve read about related topics before (see e.g. this), but if you haven’t, some parts of the lecture will probably be too technical for you to follow.

May 3, 2017 Posted by | Cancer/oncology, Cardiology, Genetics, Immunology, Lectures, Medicine, Molecular biology, Papers | Leave a comment

A few diabetes papers of interest

i. Cognitive Dysfunction in Older Adults With Diabetes: What a Clinician Needs to Know. I’ve talked about these topics before here on the blog (see e.g. these posts on related topics), but this is a good summary article. I have added some observations from the paper below:

“Although cognitive dysfunction is associated with both type 1 and type 2 diabetes, there are several distinct differences observed in the domains of cognition affected in patients with these two types. Patients with type 1 diabetes are more likely to have diminished mental flexibility and slowing of mental speed, whereas learning and memory are largely not affected (8). Patients with type 2 diabetes show decline in executive function, memory, learning, attention, and psychomotor efficiency (9,10).”

“So far, it seems that the risk of cognitive dysfunction in type 2 diabetes may be influenced by glycemic control, hypoglycemia, inflammation, depression, and macro- and microvascular pathology (14). The cumulative impact of these conditions on the vascular etiology may further decrease the threshold at which cognition is affected by other neurological conditions in the aging brain. In patients with type 1 diabetes, it seems as though diabetes has a lesser impact on cognitive dysfunction than those patients with type 2 diabetes. […] Thus, the cognitive decline in patients with type 1 diabetes may be mild and may not interfere with their functionality until later years, when other aging-related factors become important. […] However, recent studies have shown a higher prevalence of cognitive dysfunction in older patients (>60 years of age) with type 1 diabetes (5).”

“Unlike other chronic diseases, diabetes self-care involves many behaviors that require various degrees of cognitive pliability and insight to perform proper self-care coordination and planning. Glucose monitoring, medications and/or insulin injections, pattern management, and diet and exercise timing require participation from different domains of cognitive function. In addition, the recognition, treatment, and prevention of hypoglycemia, which are critical for the older population, also depend in large part on having intact cognition.

The reason a clinician needs to recognize different domains of cognition affected in patients with diabetes is to understand which self-care behavior will be affected in that individual. […] For example, a patient with memory problems may forget to take insulin doses, forget to take medications/insulin on time, or forget to eat on time. […] Cognitively impaired patients using insulin are more likely to not know what to do in the event of low blood glucose or how to manage medication on sick days (34). Patients with diminished mental flexibility and processing speed may do well with a simple regimen but may fail if the regimen is too complex. In general, older patients with diabetes with cognitive dysfunction are less likely to be involved in diabetes self-care and glucose monitoring compared with age-matched control subjects (35). […] Other comorbidities associated with aging and diabetes also add to the burden of cognitive impairment and its impact on self-care abilities. For example, depression is associated with a greater decline in cognitive function in patients with type 2 diabetes (36). Depression also can independently negatively impact the motivation to practice self-care.”

“Recently, there is an increasing discomfort with the use of A1C as a sole parameter to define glycemic goals in the older population. Studies have shown that A1C values in the older population may not reflect the same estimated mean glucose as in the younger population. Possible reasons for this discrepancy are the commonly present comorbidities that impact red cell life span (e.g., anemia, uremia, renal dysfunction, blood transfusion, erythropoietin therapy) (45,46). In addition, A1C level does not reflect glucose excursions and variability. […] Thus, it is prudent to avoid A1C as the sole measure of glycemic goal in this population. […] In patients who need insulin therapy, simplification, also known as de-intensification of the regimen, is generally recommended in all frail patients, especially if they have cognitive dysfunction (37,49). However, the practice has not caught up with the recommendations as shown by large observational studies showing unnecessary intensive control in patients with diabetes and dementia (50–52).”

“With advances in the past few decades, we now see a larger number of patients with type 1 diabetes who are aging successfully and facing the new challenges that aging brings. […] Patients with type 1 diabetes are typically proactive in their disease management and highly disciplined. Cognitive dysfunction in these patients creates significant distress for the first time in their lives; they suddenly feel a “lack of control” over the disease they have managed for many decades. The addition of autonomic dysfunction, gastropathy, or neuropathy may result in wider glucose excursions. These patients are usually more afraid of hyperglycemia than hypoglycemia — both of which they have managed for many years. However, cognitive dysfunction in older adults with type 1 diabetes has been found to be associated with hypoglycemic unawareness and glucose variability (5), which in turn increases the risk of severe hypoglycemia (54). The need for goal changes to avoid hypoglycemia and accept some hyperglycemia can be very difficult for many of these patients.”

ii. Trends in Drug Utilization, Glycemic Control, and Rates of Severe Hypoglycemia, 2006–2013.

“From 2006 to 2013, use increased for metformin (from 47.6 to 53.5%), dipeptidyl peptidase 4 inhibitors (0.5 to 14.9%), and insulin (17.1 to 23.0%) but declined for sulfonylureas (38.8 to 30.8%) and thiazolidinediones (28.5 to 5.6%; all P < 0.001). […] The overall rate of severe hypoglycemia remained the same (1.3 per 100 person-years; P = 0.72), declined modestly among the oldest patients (from 2.9 to 2.3; P < 0.001), and remained high among those with two or more comorbidities (3.2 to 3.5; P = 0.36). […] During the recent 8-year period, the use of glucose-lowering drugs has changed dramatically among patients with T2DM. […] The use of older classes of medications, such as sulfonylureas and thiazolidinediones, declined. During this time, glycemic control of T2DM did not improve in the overall population and remained poor among nearly a quarter of the youngest patients. Rates of severe hypoglycemia remained largely unchanged, with the oldest patients and those with multiple comorbidities at highest risk. These findings raise questions about the value of the observed shifts in drug utilization toward newer and costlier medications.”

“Our findings are consistent with a prior study of drug prescribing in U.S. ambulatory practice conducted from 1997 to 2012 (2). In that study, similar increases in DPP-4 inhibitor and insulin analog prescribing were observed; these changes were accompanied by a 61% increase in drug expenditures (2). Our study extends these findings to drug utilization and demonstrates that these increases occurred in all age and comorbidity subgroups. […] In contrast, metformin use increased only modestly between 2006 and 2013 and remained relatively low among older patients and those with two or more comorbidities. Although metformin is recommended as first-line therapy (26), it may be underutilized as the initial agent for the treatment of T2DM (27). Its use may be additionally limited by coexisting contraindications, such as chronic kidney disease (28).”

“The proportion of patients with a diagnosis of diabetes who did not fill any glucose-lowering medications declined slightly (25.7 to 24.1%; P < 0.001).”

That is, one in four people who had a diagnosis of type 2 diabetes were not taking any prescription drugs for their health condition. I wonder how many of those people have read wikipedia articles like this one

“When considering treatment complexity, the use of oral monotherapy increased slightly (from 24.3 to 26.4%) and the use of multiple (two or more) oral agents declined (from 33.0 to 26.5%), whereas the use of insulin alone and in combination with oral agents increased (from 6.0 to 8.5% and from 11.1 to 14.6%, respectively; all P values <0.001).”

“Between 1987 and 2011, per person medical spending attributable to diabetes doubled (4). More than half of the increase was due to prescription drug spending (4). Despite these spending increases and greater utilization of newly developed medications, we showed no concurrent improvements in overall glycemic control or the rates of severe hypoglycemia in our study. Although the use of newer and more expensive agents may have other important benefits (44), further studies are needed to define the value and cost-effectiveness of current treatment options.”

iii. Among Low-Income Respondents With Diabetes, High-Deductible Versus No-Deductible Insurance Sharply Reduces Medical Service Use.

“Using the 2011–2013 Medical Expenditure Panel Survey, bivariate and regression analyses were conducted to compare demographic characteristics, medical service use, diabetes care, and health status among privately insured adult respondents with diabetes, aged 18–64 years (N = 1,461) by lower (<200% of the federal poverty level) and higher (≥200% of the federal poverty level) income and deductible vs. no deductible (ND), low deductible ($1,000/$2,400) (LD), and high deductible (>$1,000/$2,400) (HD). The National Health Interview Survey 2012–2014 was used to analyze differences in medical debt and delayed/avoided needed care among adult respondents with diabetes (n = 4,058) by income. […] Compared with privately insured respondents with diabetes with ND, privately insured lower-income respondents with diabetes with an LD report significant decreases in service use for primary care, checkups, and specialty visits (27%, 39%, and 77% lower, respectively), and respondents with an HD decrease use by 42%, 65%, and 86%, respectively. Higher-income respondents with an LD report significant decreases in specialty (28%) and emergency department (37%) visits.”

“The reduction in ambulatory visits made by lower-income respondents with ND compared with lower-income respondents with an LD or HD is far greater than for higher-income patients. […] The substantial reduction in checkup (preventive) and specialty visits by those with a lower income who have an HDHP [high-deductible health plan, US] implies a very different pattern of service use compared with lower-income respondents who have ND and with higher-income respondents. Though preventive visits require no out-of-pocket costs, reduced preventive service use with HDHPs is well established and might be the result of patients being unaware of this benefit or their concern about findings that could lead to additional expenses (31). Such sharply reduced service use by low-income respondents with diabetes may not be desirable. Patients with diabetes benefit from assessment of diabetes control, encouragement and reinforcement of behavior change and medication use, and early detection and treatment of diabetes complications or concomitant disease.”

iv. Long-term Mortality and End-Stage Renal Disease in a Type 1 Diabetes Population Diagnosed at Age 15–29 Years in Norway.

OBJECTIVE To study long-term mortality, causes of death, and end-stage renal disease (ESRD) in people diagnosed with type 1 diabetes at age 15–29 years.

RESEARCH DESIGN AND METHODS This nationwide, population-based cohort with type 1 diabetes diagnosed during 1978–1982 (n = 719) was followed from diagnosis until death, emigration, or September 2013. Linkages to the Norwegian Cause of Death Registry and the Norwegian Renal Registry provided information on causes of death and whether ESRD was present.

RESULTS During 30 years’ follow-up, 4.6% of participants developed ESRD and 20.6% (n = 148; 106 men and 42 women) died. Cumulative mortality by years since diagnosis was 6.0% (95% CI 4.5–8.0) at 10 years, 12.2% (10.0–14.8) at 20 years, and 18.4% (15.8–21.5) at 30 years. The SMR [standardized mortality ratio] was 4.4 (95% CI 3.7–5.1). Mean time from diagnosis of diabetes to ESRD was 23.6 years (range 14.2–33.5). Death was caused by chronic complications (32.2%), acute complications (20.5%), violent death (19.9%), or any other cause (27.4%). Death was related to alcohol in 15% of cases. SMR for alcohol-related death was 6.8 (95% CI 4.5–10.3), for cardiovascular death was 7.3 (5.4–10.0), and for violent death was 3.6 (2.3–5.3).

CONCLUSIONS The cumulative incidence of ESRD was low in this cohort with type 1 diabetes followed for 30 years. Mortality was 4.4 times that of the general population, and more than 50% of all deaths were caused by acute or chronic complications. A relatively high proportion of deaths were related to alcohol.”
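For readers unfamiliar with the SMR: it is simply the number of deaths observed in the cohort divided by the number of deaths that would have been expected if the cohort had experienced the age- and sex-specific mortality rates of the general population. A small sketch with an exact Poisson confidence interval (the expected count is back-calculated from the reported figures, so treat the numbers as illustrative):

    from scipy.stats import chi2

    observed = 148            # deaths in the cohort
    expected = 148 / 4.4      # ~33.6 deaths expected at general-population rates

    smr = observed / expected
    # Exact Poisson 95% CI for the observed count, scaled by the expected count
    lower = chi2.ppf(0.025, 2 * observed) / (2 * expected)
    upper = chi2.ppf(0.975, 2 * (observed + 1)) / (2 * expected)
    print(f"SMR = {smr:.1f} (95% CI {lower:.1f}-{upper:.1f})")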

Some additional observations from the paper:

“Studies assessing causes of death in type 1 diabetes are most frequently conducted in individuals diagnosed during childhood (1–7) or without evaluating the effect of age at diagnosis (8,9). Reports on causes of death in cohorts of patients diagnosed during late adolescence or young adulthood, with long-term follow-up, are less frequent (10). […] Adherence to treatment during this age is poor and the risk of acute diabetic complications is high (13–16). Mortality may differ between those with diabetes diagnosed during this period of life and those diagnosed during childhood.”

“Mortality was between four and five times higher than in the general population […]. The excess mortality was similar for men […] and women […]. SMR was higher in the lower age bands — 6.7 (95% CI 3.9–11.5) at 15–24 years and 7.3 (95% CI 5.2–10.1) at 25–34 years — compared with the higher age bands: 3.7 (95% CI 2.7–4.9) at 45–54 years and 3.9 (95% CI 2.6–5.8) at 55–65 years […]. The Cox regression model showed that the risk of death increased significantly by age at diagnosis (HR 1.1; 95% CI 1.1–1.2; P < 0.001) and was eight to nine times higher if ESRD was present (HR 8.7; 95% CI 4.8–15.5; P < 0.0001). […] the underlying cause of death was diabetes in 57 individuals (39.0%), circulatory in 22 (15.1%), cancer in 18 (12.3%), accidents or intoxications in 20 (13.7%), suicide in 8 (5.5%), and any other cause in 21 (14.4%) […] In addition, diabetes contributed to death in 29.5% (n = 43) and CVD contributed to death in 10.9% (n = 29) of the 146 cases. Diabetes was mentioned on the death certificate for 68.2% of the cohort but for only 30.0% of the violent deaths. […] In 60% (88/146) of the cases the review committee considered death to be related to diabetes, whereas in 40% (58/146) the cause was unrelated to diabetes or had an unknown relation to diabetes. According to the clinical committee, acute complications caused death in 20.5% (30/146) of the cases; 20 individuals died as a result of DKA and 10 from hypoglycemia. […] Chronic complications caused the largest proportion of deaths (47/146; 32.2%) and increased with increasing duration of diabetes […]. Among individuals dying as a result of chronic complications (n = 47), CVD caused death in 94% (n = 44) and renal failure in 6% (n = 3). ESRD contributed to death in 22.7% (10/44) of those dying from CVD. Cardiovascular death occurred at mortality rates seven times higher than those in the general population […]. ESRD caused or contributed to death in 13 of 14 cases, when present.”

“Violence (intoxications, accidents, and suicides) was the leading cause of death before 10 years’ duration of diabetes; thereafter it was only a minor cause […] Insulin was used in two of seven suicides. […] According to the available medical records and autopsy reports, about 20% (29/146) of the deceased misused alcohol. In 15% (22/146) alcohol-related ICD-10 codes were listed on the death certificate (18% [19/106] of men and 8% [3/40] of women). In 10 cases the cause of death was uncertain but considered to be related to alcohol or diabetes […] The SMR for alcohol-related death was high when considering the underlying cause of death (5.0; 95% CI 2.5–10.0), and even higher when considering all alcohol-related ICD-10 codes listed on the death certificate (6.8; 95% CI 4.5–10.3). The cause of death was associated with alcohol in 21.8% (19/87) of those who died with less than 20 years’ diabetes duration. Drug abuse was noted on the death certificate in only two cases.”

“During follow-up, 33 individuals (4.6%; 22 men and 11 women) developed ESRD as a result of diabetic nephropathy. Mean time from diagnosis of diabetes to ESRD was 23.6 years (range 14.2–33.5 years). Cumulative incidence of ESRD by years since diagnosis of diabetes was 1.4% (95% CI 0.7–2.7) at 20 years and 4.8% (95% CI 3.4–6.9) at 30 years.”

“This study highlights three important findings. First, among individuals who were diagnosed with type 1 diabetes in late adolescence and early adulthood and had good access to health care, and who were followed for 30 years, mortality was four to five times that of the general population. Second, 15% of all deaths were associated with alcohol, and the SMR for alcohol-related deaths was 6.8. Third, there was a relatively low cumulative incidence of ESRD (4.8%) 30 years after the diagnosis of diabetes.

We report mortality higher than those from a large, population-based study from Finland that found cumulative mortality around 6% at 20 years’ and 15% at 30 years’ duration of diabetes among a population with age at onset and year of diagnosis similar to those in our cohort (10). The corresponding numbers in our cohort were 12% and 18%, respectively; the discrepancy was particularly high at 20 years. The SMR in the Finnish cohort was lower than that in our cohort (2.6–3.0 vs. 3.7–5.1), and those authors reported the SMR to be lower in late-onset diabetes (at age 15–29 years) compared with early-onset diabetes (at age 23). The differences between the Norwegian and Finnish data are difficult to explain since both reports are from countries with good access to health care and a high incidence of type 1 diabetes.”

However, the reason for the somewhat different SMRs in these two reasonably similar countries may actually be quite simple – the important variable may be alcohol:

“Finland and Norway are appropriate to compare because they share important population and welfare characteristics. There are, however, significant differences in drinking levels and alcohol-related mortality: the Finnish population consumes more alcohol and the Norwegian population consumes less. The mortality rates for deaths related to alcohol are about three to four times higher in Finland than in Norway (30). […] The markedly higher SMR in our cohort can probably be explained by the lower mortality rates for alcohol-related mortality in the general population. […] In conclusion, the high mortality reported in this cohort with an onset of diabetes in late adolescence and young adulthood draws attention to people diagnosed during a vulnerable period of life. Both acute and chronic complications cause substantial premature mortality […] Our study suggests that increased awareness of alcohol-related death should be encouraged in clinics providing health care to this group of patients.”

April 23, 2017 Posted by | Diabetes, Economics, Epidemiology, Health Economics, Medicine, Nephrology, Neurology, Papers, Pharmacology, Psychology | Leave a comment

A few autism papers

i. The anterior insula in autism: Under-connected and under-examined.

“While the past decade has witnessed a proliferation of neuroimaging studies of autism, theoretical approaches for understanding systems-level brain abnormalities remain poorly developed. We propose a novel anterior insula-based systems-level model for investigating the neural basis of autism, synthesizing recent advances in brain network functional connectivity with converging evidence from neuroimaging studies in autism. The anterior insula is involved in interoceptive, affective and empathic processes, and emerging evidence suggests it is part of a “salience network” integrating external sensory stimuli with internal states. Network analysis indicates that the anterior insula is uniquely positioned as a hub mediating interactions between large-scale networks involved in externally- and internally-oriented cognitive processing. A recent meta-analysis identifies the anterior insula as a consistent locus of hypoactivity in autism. We suggest that dysfunctional anterior insula connectivity plays an important role in autism. […]

Increasing evidence for abnormal brain connectivity in autism comes from studies using functional connectivity measures […] These findings support the hypothesis that under-connectivity between specific brain regions is a characteristic feature of ASD. To date, however, few studies have examined functional connectivity within and between key large-scale canonical brain networks in autism […] The majority of published studies to date have examined connectivity of specific individual brain regions, without a broader theoretically driven systems-level approach.

We propose that a systems-level approach is critical for understanding the neurobiology of autism, and that the anterior insula is a key node in coordinating brain network interactions, due to its unique anatomy, location, function, and connectivity.”

ii. Romantic Relationships and Relationship Satisfaction Among Adults With Asperger Syndrome and High‐Functioning Autism.

“Participants, 31 recruited via an outpatient clinic and 198 via an online survey, were asked to answer a number of self-report questionnaires. The total sample comprised 229 high-functioning adults with ASD (40% males, average age: 35 years). […] Of the total sample, 73% indicated romantic relationship experience and only 7% had no desire to be in a romantic relationship. ASD individuals whose partner was also on the autism spectrum were significantly more satisfied with their relationship than those with neurotypical partners. Severity of autism, schizoid symptoms, empathy skills, and need for social support were not correlated with relationship status. […] Our findings indicate that the vast majority of high-functioning adults with ASD are interested in romantic relationships.”

Those results are very different from other results in the field – for example: “[a] meta-analysis of follow-up studies examining outcomes of ASD individuals revealed that, [o]n average only 14% of the individuals included in the reviewed studies were married or ha[d] a long-term, intimate relationship (Howlin, 2012)” – and one major reason is that they only include high-functioning autistics. I also feel sort of iffy about the validity of the selection method used for procuring the online sample, which may be another major factor: almost one third of the participants had a university degree, so this is definitely not a random sample of high-functioning autistics (‘high-functioning’ autistics are not that high-functioning in the general setting). Also, the sex ratio is very skewed, as 60% of the participants in the study were female. A sex ratio like that may not sound like a big problem, but it is a major problem, because a substantial majority of individuals with mild autism are males. Whereas the sex ratio is almost equal in the context of syndromic ASD, non-syndromic ASD is much more prevalent in males, with sex ratios approaching 1:7 in milder cases (link). These people are definitely looking at the milder cases, which means that a sample which skews female will not be remotely similar to most random samples of such individuals taken in the community setting. And this matters because females do better than males. A discussion can be had about the extent to which women are under-diagnosed, but I have not seen data convincing me this is a major problem. It’s important to keep in mind in that context that the autism diagnosis is not based on phenotype alone, but on a phenotype-environment interaction; if you have what might be termed ‘an autistic phenotype’ but you are not suffering any significant ill effects as a result of it, because you’re able to compensate relatively well (i.e. you are able to handle ‘the environment’ reasonably well despite the neurological makeup you’ve ended up with), you should not get an autism diagnosis – a diagnostic requirement is ‘clinically significant impairment in functioning’.

Anyway some more related data from the publication:

“Studies that analyze outcomes exclusively for ASD adults without intellectual impairment are rare. […] Engström, Ekström, and Emilsson (2003) recruited previous patients with an ASD diagnosis from four psychiatric clinics in Sweden. They reported that 5 (31%) of 16 adults with ASD had ”some form of relation with a partner.” Hofvander et al. (2009) analyzed data from 122 participants who had been referred to outpatient clinics for autism diagnosis. They found that 19 (16%) of all participants had lived in a long-term relationship.
Renty and Roeyers (2006) […] reported that at the time of the[ir] study 19% of 58 ASD adults had a romantic relationship and 8.6% were married or living with a partner. Cederlund, Hagberg, Billstedt, Gillberg, and Gillberg (2008) conducted a follow-up study of male individuals (aged 16–36 years) who had been diagnosed with Asperger syndrome at least 5 years before. […] at the time of the study, three (4%) [out of 76 male ASD individuals] of them were living in a long-term romantic relationship and 10 (13%) had had romantic relationships in the past.”

A few more data and observations from the study:

“A total of 166 (73%) of the 229 participants endorsed currently being in a romantic relationship or having a history of being in a relationship; 100 (44%) reported current involvement in a romantic relationship; 66 (29%) endorsed that they were currently single but have a history of involvement in a romantic relationship; and 63 (27%) participants did not have any experience with romantic relationships. […] Participants without any romantic relationship experience were significantly more likely to be male […] According to participants’ self-report, one fifth (20%) of the 100 participants who were currently involved in a romantic relationship were with an ASD partner. […] Of the participants who were currently single, 65% said that contact with another person was too exhausting for them, 61% were afraid that they would not be able to fulfil the expectations of a romantic partner, and 57% said that they did not know how they could find and get involved with a partner; and 50% stated that they did not know how a romantic relationship works or how they would be expected to behave in a romantic relationship”

“[P]revious studies that exclusively examined adults with ASD without intellectual impairment reported lower levels of romantic relationship experience than the current study, with numbers varying between 16% and 31% […] The results of our study can be best compared with the results of Hofvander et al. (2009) and Renty and Roeyers (2006): They selected their samples […] using methods that are comparable to ours. Hofvander et al. (2009) found that 16% of their participants have had romantic relationship experience in the past, compared to 29% in our sample; and Renty and Roeyers (2006) report that 28% of their participants were either married or engaged in a romantic relationship at the time of their study, compared to 44% in our study. […] Compared to typically developed individuals the percentage of ASD individuals with a romantic relationship partner is relatively low (Weimann, 2010). In the group aged 27–59 years, 68% of German males live together with a partner, 27% are single, and 5% still live with their parents. In the same age group, 73% of all females live with a partner, 26% live on their own, and 2% still live with their parents.”

“As our results show, it is not the case that male ASD individuals do not feel a need for romantic relationships. In fact, the contrary is true. Single males had a greater desire to be in a romantic relationship than single females, and males were more distressed than females about not being in a romantic relationship.” (…maybe in part because the single females were more likely than the single males to be single by choice?)

“Our findings showed that being with a partner who also has an ASD diagnosis makes a romantic relationship more satisfying for ASD individuals. None of the participants, who had been with a partner in the past but then separated, had been together with an ASD partner. This might indicate that once a person with ASD has found a partner who is also on the spectrum, a relationship might be very stable and long lasting.”

iii. Reward Processing in Autism.

“The social motivation hypothesis of autism posits that infants with autism do not experience social stimuli as rewarding, thereby leading to a cascade of potentially negative consequences for later development. […] Here we use functional magnetic resonance imaging to examine social and monetary rewarded implicit learning in children with and without autism spectrum disorders (ASD). Sixteen males with ASD and sixteen age- and IQ-matched typically developing (TD) males were scanned while performing two versions of a rewarded implicit learning task. In addition to examining responses to reward, we investigated the neural circuitry supporting rewarded learning and the relationship between these factors and social development. We found diminished neural responses to both social and monetary rewards in ASD, with a pronounced reduction in response to social rewards (SR). […] Moreover, we show a relationship between ventral striatum activity and social reciprocity in TD children. Together, these data support the hypothesis that children with ASD have diminished neural responses to SR, and that this deficit relates to social learning impairments. […] When we examined the general neural response to monetary and social reward events, we discovered that only TD children showed VS [ventral striatum] activity for both reward types, whereas ASD children did not demonstrate a significant response to either monetary or SR. However, significant between-group differences were shown only for SR, suggesting that children with ASD may be specifically impaired on processing SR.”

I’m not quite sure I buy that the methodology captures what it is supposed to capture (“The SR feedback consisted of a picture of a smiling woman with the words “That’s Right!” in green text for correct trials and a picture of the same woman with a sad face along with the words “That’s Wrong” in red text for incorrect trials”) (this is supposed to be the ‘social reward feedback’), but on the other hand: “The chosen reward stimuli, faces and coins, are consistent with those used in previous studies of reward processing” (so either multiple studies are of dubious quality, or this kind of method actually ‘works’ – but I don’t know enough about the field to tell which of the two conclusions applies).

iv. The Social Motivation Theory of Autism.

“The idea that social motivation deficits play a central role in Autism Spectrum Disorders (ASD) has recently gained increased interest. This constitutes a shift in autism research, which has traditionally focused more intensely on cognitive impairments, such as Theory of Mind deficits or executive dysfunction, while granting comparatively less attention to motivational factors. This review delineates the concept of social motivation and capitalizes on recent findings in several research areas to provide an integrated picture of social motivation at the behavioral, biological and evolutionary levels. We conclude that ASD can be construed as an extreme case of diminished social motivation and, as such, provides a powerful model to understand humans’ intrinsic drive to seek acceptance and avoid rejection.”

v. Stalking, and Social and Romantic Functioning Among Adolescents and Adults with Autism Spectrum Disorder.

“We examine the nature and predictors of social and romantic functioning in adolescents and adults with ASD. Parental reports were obtained for 25 ASD adolescents and adults (13-36 years), and 38 typical adolescents and adults (13-30 years). The ASD group relied less upon peers and friends for social (OR = 52.16, p < .01) and romantic learning (OR = 38.25, p < .01). Individuals with ASD were more likely to engage in inappropriate courting behaviours (χ2(df = 19) = 3168.74, p < .001) and were more likely to focus their attention upon celebrities, strangers, colleagues, and ex-partners (χ2(df = 5) = 2335.40, p < .001), and to pursue their target longer than controls (t = -2.23, df = 18.79, p < .05).”

“Examination of relationships the individuals were reported to have had with the target of their social or romantic interest, indicated that ASD adolescents and adults sought to initiate fewer social and romantic relationships but across a wider variety of people, such as strangers, colleagues, acquaintances, friends, ex-partners, and celebrities. […] typically developing peers […] were more likely to target colleagues, acquaintances, friends, and ex-partners in their relationship attempts, whilst the ASD group targeted these less frequently than expected, and attempted to initiate relationships significantly more frequently than is typical, with strangers and celebrities. […] In attempting to pursue and initiate social and romantic relationships, the ASD group were reported to display a much wider variety of courtship behaviours than the typical group. […] ASD adolescents and adults were more likely to touch the person of interest inappropriately, believe that the target must reciprocate their feelings, show obsessional interest, make inappropriate comments, monitor the person’s activities, follow them, pursue them in a threatening manner, make threats against the person, and threaten self-harm. ASD individuals displayed the majority of the behaviours indiscriminately across all types of targets. […] ASD adolescents and adults were also found […] to persist in their relationship pursuits for significantly longer periods of time than typical adolescents and adults when they received a negative or no response from the person or their family.”

April 4, 2017 Posted by | autism, Neurology, Papers, Psychology | Leave a comment

Information complexity and applications

I have previously posted multiple lectures together in my ‘lecture posts’ here on the blog, or combined a lecture with other material (e.g. links such as those in the previous ‘random stuff’ post). I think such approaches have made me less likely to post lectures at all – if I don’t post a lecture soon after I’ve watched it, experience tells me that I not infrequently simply never get around to posting it – and on top of that I don’t really watch a lot of lectures these days. For these reasons I have decided to start posting single-lecture posts here on the blog. When I think about the time expenditure of people reading along, this approach also seems justified: although it might take me as much time and work to watch and cover, say, four lectures as it would take me to read and cover 100 pages of a textbook, the time required of a reader of the blog is very different in the two cases. You will usually be able to read a post that took me multiple hours to write in a short amount of time, whereas the reader’s ‘time advantage’ is close to negligible in the case of lectures (maybe not completely negligible; search costs are not completely irrelevant). By posting multiple lectures in the same post I probably decrease the expected value of the time readers spend watching the content I upload, which seems suboptimal.

Here’s the youtube description of the lecture, which was posted a few days ago on the IAS youtube account:

“Over the past two decades, information theory has reemerged within computational complexity theory as a mathematical tool for obtaining unconditional lower bounds in a number of models, including streaming algorithms, data structures, and communication complexity. Many of these applications can be systematized and extended via the study of information complexity – which treats information revealed or transmitted as the resource to be conserved. In this overview talk we will discuss the two-party information complexity and its properties – and the interactive analogues of classical source coding theorems. We will then discuss applications to exact communication complexity bounds, hardness amplification, and quantum communication complexity.”

He actually decided to skip the quantum communication complexity material because of the time constraint. I should note that I found the lecture easy enough to follow for the most part, so it is not really that difficult, at least not if you know some basic information theory.

A few links to related stuff (you can take these links as indications of what sort of stuff the lecture is about/discusses, if you’re on the fence about whether or not to watch it):
Computational complexity theory.
Shannon entropy.
Shannon’s source coding theorem.
Communication complexity.
Communications protocol.
Information-based complexity.
Hash function.
From Information to Exact Communication (in the lecture he discusses some aspects covered in this paper).
Unique games conjecture (Two-prover proof systems).
A Counterexample to Strong Parallel Repetition (another paper mentioned/briefly discussed during the lecture).
Pinsker’s inequality.

An interesting aspect I once again noted during this lecture is the sort of loose linkage you sometimes observe between the topics of game theory/microeconomics and computer science. The link is of course made explicit later in the talk when he discusses the unique games conjecture linked above, but it is on display even before that point is reached. Around 38 minutes into the lecture he mentions that one of the relevant proofs ‘involves such things as Lagrange multipliers and optimization’. I was far from surprised, as from a certain point of view the problem he discusses at that point is conceptually very similar to some problems encountered in auction theory, where Lagrange multipliers and optimization problems are frequently encountered. If you are too unfamiliar with that field to see the analogy: in an auction setting the corresponding problem involves participants who prefer not to reveal their true willingness to pay, and some auction designs work in a manner quite similar to the (pseudo-)protocol described in the lecture and are thus used to reveal it (for some subset of participants, at least).
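To make a couple of the concepts linked above a bit more concrete, here is a minimal Python sketch – my own illustration, not something from the lecture – computing Shannon entropy and Kullback–Leibler divergence for a pair of toy distributions and checking Pinsker’s inequality numerically:

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy (in bits) of a discrete distribution p."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                              # convention: 0*log(0) = 0
    return -np.sum(p * np.log2(p))

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p||q) in bits; assumes q > 0 wherever p > 0."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0
    return np.sum(p[mask] * np.log2(p[mask] / q[mask]))

def total_variation(p, q):
    """Total variation distance between two discrete distributions."""
    return 0.5 * np.sum(np.abs(np.asarray(p, dtype=float) - np.asarray(q, dtype=float)))

p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]

print("H(p) =", shannon_entropy(p))
# Pinsker's inequality: TV(p, q) <= sqrt(D(p||q) * ln 2 / 2) when D is measured in bits.
print("TV =", total_variation(p, q),
      "<= bound =", np.sqrt(kl_divergence(p, q) * np.log(2) / 2))
```

Running it, the total variation distance here (0.1) indeed falls below the Pinsker bound (roughly 0.11).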

March 12, 2017 Posted by | Computer science, Game theory, Lectures, Papers | Leave a comment

Random stuff

It’s been a long time since I last posted one of these posts, so a great number of links of interest have accumulated in my bookmarks. I intend to include a large number of them in this post, which of course means that I won’t cover each specific link in anywhere near the amount of detail it deserves, but that can’t be helped.

i. Autism Spectrum Disorder Grown Up: A Chart Review of Adult Functioning.

“For those diagnosed with ASD in childhood, most will become adults with a significant degree of disability […] Seltzer et al […] concluded that, despite considerable heterogeneity in social outcomes, “few adults with autism live independently, marry, go to college, work in competitive jobs or develop a large network of friends”. However, the trend within individuals is for some functional improvement over time, as well as a decrease in autistic symptoms […]. Some authors suggest that a sub-group of 15–30% of adults with autism will show more positive outcomes […]. Howlin et al. (2004), and Cederlund et al. (2008) assigned global ratings of social functioning based on achieving independence, friendships/a steady relationship, and education and/or a job. These two papers described respectively 22% and 27% of groups of higher functioning (IQ above 70) ASD adults as attaining “Very Good” or “Good” outcomes.”

“[W]e evaluated the adult outcomes for 45 individuals diagnosed with ASD prior to age 18, and compared this with the functioning of 35 patients whose ASD was identified after 18 years. Concurrent mental illnesses were noted for both groups. […] Comparison of adult outcome within the group of subjects diagnosed with ASD prior to 18 years of age showed significantly poorer functioning for those with co-morbid Intellectual Disability, except in the domain of establishing intimate relationships [my emphasis. To make this point completely clear, one way to look at these results is that apparently in the domain of partner-search autistics diagnosed during childhood are doing so badly in general that being intellectually disabled on top of being autistic is apparently conferring no additional disadvantage]. Even in the normal IQ group, the mean total score, i.e. the sum of the 5 domains, was relatively low at 12.1 out of a possible 25. […] Those diagnosed as adults had achieved significantly more in the domains of education and independence […] Some authors have described a subgroup of 15–27% of adult ASD patients who attained more positive outcomes […]. Defining an arbitrary adaptive score of 20/25 as “Good” for our normal IQ patients, 8 of thirty four (25%) of those diagnosed as adults achieved this level. Only 5 of the thirty three (15%) diagnosed in childhood made the cutoff. (The cut off was consistent with a well, but not superlatively, functioning member of society […]). None of the Intellectually Disabled ASD subjects scored above 10. […] All three groups had a high rate of co-morbid psychiatric illnesses. Depression was particularly frequent in those diagnosed as adults, consistent with other reports […]. Anxiety disorders were also prevalent in the higher functioning participants, 25–27%. […] Most of the higher functioning ASD individuals, whether diagnosed before or after 18 years of age, were functioning well below the potential implied by their normal range intellect.”

Related papers: Social Outcomes in Mid- to Later Adulthood Among Individuals Diagnosed With Autism and Average Nonverbal IQ as Children, Adults With Autism Spectrum Disorders.

ii. Premature mortality in autism spectrum disorder. This is a Swedish matched case cohort study. Some observations from the paper:

“The aim of the current study was to analyse all-cause and cause-specific mortality in ASD using nationwide Swedish population-based registers. A further aim was to address the role of intellectual disability and gender as possible moderators of mortality and causes of death in ASD. […] Odds ratios (ORs) were calculated for a population-based cohort of ASD probands (n = 27 122, diagnosed between 1987 and 2009) compared with gender-, age- and county of residence-matched controls (n = 2 672 185). […] During the observed period, 24 358 (0.91%) individuals in the general population died, whereas the corresponding figure for individuals with ASD was 706 (2.60%; OR = 2.56; 95% CI 2.38–2.76). Cause-specific analyses showed elevated mortality in ASD for almost all analysed diagnostic categories. Mortality and patterns for cause-specific mortality were partly moderated by gender and general intellectual ability. […] Premature mortality was markedly increased in ASD owing to a multitude of medical conditions. […] Mortality was significantly elevated in both genders relative to the general population (males: OR = 2.87; females OR = 2.24)”.

“Individuals in the control group died at a mean age of 70.20 years (s.d. = 24.16, median = 80), whereas the corresponding figure for the entire ASD group was 53.87 years (s.d. = 24.78, median = 55), for low-functioning ASD 39.50 years (s.d. = 21.55, median = 40) and high-functioning ASD 58.39 years (s.d. = 24.01, median = 63) respectively. […] Significantly elevated mortality was noted among individuals with ASD in all analysed categories of specific causes of death except for infections […] ORs were highest in cases of mortality because of diseases of the nervous system (OR = 7.49) and because of suicide (OR = 7.55), in comparison with matched general population controls.”
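As a side note for readers unsure how odds ratios relate to raw counts like those quoted above: below is a minimal Python sketch computing a crude (unmatched) odds ratio and a Wald-type 95% confidence interval from a 2×2 table. This is only meant to illustrate the arithmetic – the paper’s estimates come from a matched design, so the crude figure computed from the totals does not exactly reproduce the reported OR of 2.56.

```python
import math

def crude_odds_ratio(a, b, c, d, z=1.96):
    """Crude odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases, c = unexposed cases, d = unexposed non-cases."""
    or_ = (a / b) / (c / d)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, (lo, hi)

# Counts quoted above: 706 deaths among 27,122 ASD probands,
# 24,358 deaths among 2,672,185 controls.
a, b = 706, 27_122 - 706
c, d = 24_358, 2_672_185 - 24_358
print(crude_odds_ratio(a, b, c, d))  # crude OR ~2.9; the paper's matched OR is 2.56
```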

iii. Adhesive capsulitis of shoulder. This one is related to a health scare I had a few months ago. A few quotes:

Adhesive capsulitis (also known as frozen shoulder) is a painful and disabling disorder of unclear cause in which the shoulder capsule, the connective tissue surrounding the glenohumeral joint of the shoulder, becomes inflamed and stiff, greatly restricting motion and causing chronic pain. Pain is usually constant, worse at night, and with cold weather. Certain movements or bumps can provoke episodes of tremendous pain and cramping. […] People who suffer from adhesive capsulitis usually experience severe pain and sleep deprivation for prolonged periods due to pain that gets worse when lying still and restricted movement/positions. The condition can lead to depression, problems in the neck and back, and severe weight loss due to long-term lack of deep sleep. People who suffer from adhesive capsulitis may have extreme difficulty concentrating, working, or performing daily life activities for extended periods of time.”

Some other related links below:

The prevalence of a diabetic condition and adhesive capsulitis of the shoulder.
“Adhesive capsulitis is characterized by a progressive and painful loss of shoulder motion of unknown etiology. Previous studies have found the prevalence of adhesive capsulitis to be slightly greater than 2% in the general population. However, the relationship between adhesive capsulitis and diabetes mellitus (DM) is well documented, with the incidence of adhesive capsulitis being two to four times higher in diabetics than in the general population. It affects about 20% of people with diabetes and has been described as the most disabling of the common musculoskeletal manifestations of diabetes.”

Adhesive Capsulitis (review article).
“Patients with type I diabetes have a 40% chance of developing a frozen shoulder in their lifetimes […] Dominant arm involvement has been shown to have a good prognosis; associated intrinsic pathology or insulin-dependent diabetes of more than 10 years are poor prognostic indicators.15 Three stages of adhesive capsulitis have been described, with each phase lasting for about 6 months. The first stage is the freezing stage in which there is an insidious onset of pain. At the end of this period, shoulder ROM [range of motion] becomes limited. The second stage is the frozen stage, in which there might be a reduction in pain; however, there is still restricted ROM. The third stage is the thawing stage, in which ROM improves, but can take between 12 and 42 months to do so. Most patients regain a full ROM; however, 10% to 15% of patients suffer from continued pain and limited ROM.”

Musculoskeletal Complications in Type 1 Diabetes.
“The development of periarticular thickening of skin on the hands and limited joint mobility (cheiroarthropathy) is associated with diabetes and can lead to significant disability. The objective of this study was to describe the prevalence of cheiroarthropathy in the well-characterized Diabetes Control and Complications Trial/Epidemiology of Diabetes Interventions and Complications (DCCT/EDIC) cohort and examine associated risk factors […] This cross-sectional analysis was performed in 1,217 participants (95% of the active cohort) in EDIC years 18/19 after an average of 24 years of follow-up. Cheiroarthropathy — defined as the presence of any one of the following: adhesive capsulitis, carpal tunnel syndrome, flexor tenosynovitis, Dupuytren’s contracture, or a positive prayer sign [related link] — was assessed using a targeted medical history and standardized physical examination. […] Cheiroarthropathy was present in 66% of subjects […] Cheiroarthropathy is common in people with type 1 diabetes of long duration (∼30 years) and is related to longer duration and higher levels of glycemia. Clinicians should include cheiroarthropathy in their routine history and physical examination of patients with type 1 diabetes because it causes clinically significant functional disability.”

Musculoskeletal disorders in diabetes mellitus: an update.
“Diabetes mellitus (DM) is associated with several musculoskeletal disorders. […] The exact pathophysiology of most of these musculoskeletal disorders remains obscure. Connective tissue disorders, neuropathy, vasculopathy or combinations of these problems, may underlie the increased incidence of musculoskeletal disorders in DM. The development of musculoskeletal disorders is dependent on age and on the duration of DM; however, it has been difficult to show a direct correlation with the metabolic control of DM.”

Rheumatic Manifestations of Diabetes Mellitus.

Prevalence of symptoms and signs of shoulder problems in people with diabetes mellitus.

Musculoskeletal Disorders of the Hand and Shoulder in Patients with Diabetes.
“In addition to micro- and macroangiopathic complications, diabetes mellitus is also associated with several musculoskeletal disorders of the hand and shoulder that can be debilitating (1,2). Limited joint mobility, also termed diabetic hand syndrome or cheiropathy (3), is characterized by skin thickening over the dorsum of the hands and restricted mobility of multiple joints. While this syndrome is painless and usually not disabling (2,4), other musculoskeletal problems occur with increased frequency in diabetic patients, including Dupuytren’s disease [“Dupuytren’s disease […] may be observed in up to 42% of adults with diabetes mellitus, typically in patients with long-standing T1D” – link], carpal tunnel syndrome [“The prevalence of [carpal tunnel syndrome, CTS] in patients with diabetes has been estimated at 11–30 % […], and is dependent on the duration of diabetes. […] Type I DM patients have a high prevalence of CTS with increasing duration of disease, up to 85 % after 54 years of DM” – link], palmar flexor tenosynovitis or trigger finger [“The incidence of trigger finger [/stenosing tenosynovitis] is 7–20 % of patients with diabetes comparing to only about 1–2 % in nondiabetic patients” – link], and adhesive capsulitis of the shoulder (5–10). The association of adhesive capsulitis with pain, swelling, dystrophic skin, and vasomotor instability of the hand constitutes the “shoulder-hand syndrome,” a rare but potentially disabling manifestation of diabetes (1,2).”

“The prevalence of musculoskeletal disorders was greater in diabetic patients than in control patients (36% vs. 9%, P < 0.01). Adhesive capsulitis was present in 12% of the diabetic patients and none of the control patients (P < 0.01), Dupuytren’s disease in 16% of diabetic and 3% of control patients (P < 0.01), and flexor tenosynovitis in 12% of diabetic and 2% of control patients (P < 0.04), while carpal tunnel syndrome occurred in 12% of diabetic patients and 8% of control patients (P = 0.29). Musculoskeletal disorders were more common in patients with type 1 diabetes than in those with type 2 diabetes […]. Forty-three patients [out of 100] with type 1 diabetes had either hand or shoulder disorders (37 with hand disorders, 6 with adhesive capsulitis of the shoulder, and 10 with both syndromes), compared with 28 patients [again out of 100] with type 2 diabetes (24 with hand disorders, 4 with adhesive capsulitis of the shoulder, and 3 with both syndromes, P = 0.03).”

Association of Diabetes Mellitus With the Risk of Developing Adhesive Capsulitis of the Shoulder: A Longitudinal Population-Based Followup Study.
“A total of 78,827 subjects with at least 2 ambulatory care visits with a principal diagnosis of DM in 2001 were recruited for the DM group. The non-DM group comprised 236,481 age- and sex-matched randomly sampled subjects without DM. […] During a 3-year followup period, 946 subjects (1.20%) in the DM group and 2,254 subjects (0.95%) in the non-DM group developed ACS. The crude HR of developing ACS for the DM group compared to the non-DM group was 1.333 […] the association between DM and ACS may be explained at least in part by a DM-related chronic inflammatory process with increased growth factor expression, which in turn leads to joint synovitis and subsequent capsular fibrosis.”
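A quick sanity check of the crude figures quoted above – my own arithmetic, not part of the paper’s analysis – also illustrates why a simple risk ratio and the reported hazard ratio need not coincide:

```python
# Quick check of the crude figures quoted above (my arithmetic, not the paper's analysis).
dm_cases, dm_total = 946, 78_827
ctrl_cases, ctrl_total = 2_254, 236_481

risk_dm = dm_cases / dm_total          # ~1.20%
risk_ctrl = ctrl_cases / ctrl_total    # ~0.95%
print(f"3-year risk, DM group: {risk_dm:.2%}; non-DM group: {risk_ctrl:.2%}")
print(f"Crude risk ratio: {risk_dm / risk_ctrl:.2f}")
# Note: this simple risk ratio (~1.26) is not identical to the reported crude HR of 1.333,
# since a hazard ratio additionally accounts for person-time at risk and censoring.
```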

It is important to note when interpreting the above paper that the results are based on Taiwanese population-level data, and that type 1 diabetes – which is obviously the high-risk diabetes subgroup in this particular context – is rare in East Asian populations (as observed in Sperling et al., “A child in Helsinki, Finland is almost 400 times more likely to develop diabetes than a child in Sichuan, China”; Taiwanese incidence of type 1 DM in children is estimated at ~5 in 100,000).

iv. Parents who let diabetic son starve to death found guilty of first-degree murder. It’s been a while since I last saw one of these ‘boost-your-faith-in-humanity’ cases, but in my impression they do pop up every now and then. I should probably keep one of these articles at hand in case my parents ever express worry to me that they weren’t good parents; they could have done a lot worse…

v. Freedom of medicine. One quote from the conclusion of Cochran’s post:

“[I]t is surely possible to materially improve the efficacy of drug development, of medical research as a whole. We’re doing better than we did 500 years ago – although probably worse than we did 50 years ago. But I would approach it by learning as much as possible about medical history, demographics, epidemiology, evolutionary medicine, theory of senescence, genetics, etc. Read Koch, not Hayek. There is no royal road to medical progress.”

I agree, and I was considering including some related comments and observations about health economics in this post – however, I ultimately decided against doing so, in part because the post was growing unwieldy; I might include those observations in another post later on. Here’s another somewhat older Westhunt post I at some point decided to bookmark – in particular I like the following neat quote from the comments, which expresses a view I have of course expressed myself in the past here on this blog:

“When you think about it, falsehoods, stupid crap, make the best group identifiers, because anyone might agree with you when you’re obviously right. Signing up to clear nonsense is a better test of group loyalty. A true friend is with you when you’re wrong. Ideally, not just wrong, but barking mad, rolling around in your own vomit wrong.”

vi. Economic Costs of Diabetes in the U.S. in 2012.

“Approximately 59% of all health care expenditures attributed to diabetes are for health resources used by the population aged 65 years and older, much of which is borne by the Medicare program […]. The population 45–64 years of age incurs 33% of diabetes-attributed costs, with the remaining 8% incurred by the population under 45 years of age. The annual attributed health care cost per person with diabetes […] increases with age, primarily as a result of increased use of hospital inpatient and nursing facility resources, physician office visits, and prescription medications. Dividing the total attributed health care expenditures by the number of people with diabetes, we estimate the average annual excess expenditures for the population aged under 45 years, 45–64 years, and 65 years and above, respectively, at $4,394, $5,611, and $11,825.”

“Our logistic regression analysis with NHIS data suggests that diabetes is associated with a 2.4 percentage point increase in the likelihood of leaving the workforce for disability. This equates to approximately 541,000 working-age adults leaving the workforce prematurely and 130 million lost workdays in 2012. For the population that leaves the workforce early because of diabetes-associated disability, we estimate that their average daily earnings would have been $166 per person (with the amount varying by demographic). Presenteeism accounted for 30% of the indirect cost of diabetes. The estimate of a 6.6% annual decline in productivity attributed to diabetes (in excess of the estimated decline in the absence of diabetes) equates to 113 million lost workdays per year.”
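A quick back-of-the-envelope check of how the quoted workforce figures fit together (my own arithmetic, not the paper’s; the 240 workdays/year assumption is mine):

```python
# Back-of-the-envelope reconstruction of the quoted workforce figures
# (assumes roughly 240 workdays per year; that assumption is mine, not the paper's).
adults_leaving_workforce = 541_000
workdays_per_year = 240
lost_workdays = adults_leaving_workforce * workdays_per_year
print(f"{lost_workdays:,} lost workdays")          # ~130 million, matching the quote

avg_daily_earnings = 166                           # dollars, as quoted
lost_earnings = lost_workdays * avg_daily_earnings
print(f"${lost_earnings/1e9:.1f} billion in forgone earnings")  # ~$21.6 billion (my figure)
```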

vii. Total red meat intake of ≥0.5 servings/d does not negatively influence cardiovascular disease risk factors: a systemically searched meta-analysis of randomized controlled trials.

viii. Effect of longer term modest salt reduction on blood pressure: Cochrane systematic review and meta-analysis of randomised trials. Did I blog this paper at some point in the past? I could not find any coverage of it on the blog when I searched for it so I decided to include it here, even if I have a nagging suspicion I may have talked about these findings before. What did they find? The short version is this:

“A modest reduction in salt intake for four or more weeks causes significant and, from a population viewpoint, important falls in blood pressure in both hypertensive and normotensive individuals, irrespective of sex and ethnic group. Salt reduction is associated with a small physiological increase in plasma renin activity, aldosterone, and noradrenaline and no significant change in lipid concentrations. These results support a reduction in population salt intake, which will lower population blood pressure and thereby reduce cardiovascular disease.”

ix. Some wikipedia links:

Heroic Age of Antarctic Exploration (featured).

Wien’s displacement law.

Kuiper belt (featured).

Treason (one quote worth including here: “Currently, the consensus among major Islamic schools is that apostasy (leaving Islam) is considered treason and that the penalty is death; this is supported not in the Quran but in the Hadith.[42][43][44][45][46][47]“).

Lymphatic filariasis.

File:World map of countries by number of cigarettes smoked per adult per year.

Australian gold rushes.

Savant syndrome (“It is estimated that 10% of those with autism have some form of savant abilities”). A small sidenote of interest to Danish readers: The Danish Broadcasting Corporation recently featured a series about autistics with ‘special abilities’ – the show was called ‘The hidden talents’ (De skjulte talenter), and after multiple people had nagged me to watch it I ended up deciding to do so. Most of the people in that show presumably had some degree of ‘savantism’ combined with autism at the milder end of the spectrum, i.e. Asperger’s. I was somewhat conflicted about what to think about the show and did consider blogging it in detail (in Danish?), but I decided against it. However I do want to say to the Danish readers who’ve seen the show that they would do well to keep in mind that a) the great majority of autistics do not have abilities like these, b) many autistics with abilities like these presumably do quite poorly, and c) many autistics have even greater social impairments than do people like e.g. (the very likeable, I have to add…) Louise Wille from the show.

Quark–gluon plasma.

Simo Häyhä.

Chernobyl liquidators.

Black Death (“Over 60% of Norway’s population died in 1348–1350”).

Renault FT (“among the most revolutionary and influential tank designs in history”).

Weierstrass function (“an example of a pathological real-valued function on the real line. The function has the property of being continuous everywhere but differentiable nowhere”).

W Ursae Majoris variable.

Void coefficient. (“a number that can be used to estimate how much the reactivity of a nuclear reactor changes as voids (typically steam bubbles) form in the reactor moderator or coolant. […] Reactivity is directly related to the tendency of the reactor core to change power level: if reactivity is positive, the core power tends to increase; if it is negative, the core power tends to decrease; if it is zero, the core power tends to remain stable. […] A positive void coefficient means that the reactivity increases as the void content inside the reactor increases due to increased boiling or loss of coolant; for example, if the coolant acts as a neutron absorber. If the void coefficient is large enough and control systems do not respond quickly enough, this can form a positive feedback loop which can quickly boil all the coolant in the reactor. This happened in the RBMK reactor that was destroyed in the Chernobyl disaster.”).

Gregor MacGregor (featured) (“a Scottish soldier, adventurer, and confidence trickster […] MacGregor’s Poyais scheme has been called one of the most brazen confidence tricks in history.”).

Stimming.

Irish Civil War.

March 10, 2017 Posted by | Astronomy, autism, Cardiology, Diabetes, Economics, Epidemiology, Health Economics, History, Infectious disease, Mathematics, Medicine, Papers, Physics, Psychology, Random stuff, Wikipedia | Leave a comment

Anesthesia

“A recent study estimated that 234 million surgical procedures requiring anaesthesia are performed worldwide annually. Anaesthesia is the largest hospital specialty in the UK, with over 12,000 practising anaesthetists […] In this book, I give a short account of the historical background of anaesthetic practice, a review of anaesthetic equipment, techniques, and medications, and a discussion of how they work. The risks and side effects of anaesthetics will be covered, and some of the subspecialties of anaesthetic practice will be explored.”

I liked the book, and I gave it three stars on goodreads; I was closer to four stars than two. Below I have added a few sample observations from the book, as well as what turned out to be a quite considerable number of links (more than 60, from a brief count) to topics/people/etc. discussed or mentioned in the text. I decided to spend a bit more time finding relevant links than I have previously done when writing link-heavy posts, so in this post I have not limited myself to wikipedia articles; I also e.g. link directly to primary literature discussed in the coverage. The links provided are, as usual, meant to be indicators of what kind of material is covered in the book, rather than an alternative to the book; some of the wikipedia articles in particular I assume are not very good (when I link to a wikipedia article of questionable quality, the main point is to indicate that I consider awareness of the concept’s existence to be of interest also to people who have not read the book, even if no better resource on the topic was immediately at hand).

Sample observations from the book:

“[G]eneral anaesthesia is not sleep. In physiological terms, the two states are very dissimilar. The term general anaesthesia refers to the state of unconsciousness which is deliberately produced by the action of drugs on the patient. Local anaesthesia (and its related terms) refers to the numbness produced in a part of the body by deliberate interruption of nerve function; this is typically achieved without affecting consciousness. […] The purpose of inhaling ether vapour [in the past] was so that surgery would be painless, not so that unconsciousness would necessarily be produced. However, unconsciousness and immobility soon came to be considered desirable attributes […] For almost a century, lying still was the only reliable sign of adequate anaesthesia.”

“The experience of pain triggers powerful emotional consequences, including fear, anger, and anxiety. A reasonable word for the emotional response to pain is ‘suffering’. Pain also triggers the formation of memories which remind us to avoid potentially painful experiences in the future. The intensity of pain perception and suffering also depends on the mental state of the subject at the time, and the relationship between pain, memory, and emotion is subtle and complex. […] The effects of adrenaline are responsible for the appearance of someone in pain: pale, sweating, trembling, with a rapid heart rate and breathing. Additionally, a hormonal storm is activated, readying the body to respond to damage and fight infection. This is known as the stress response. […] Those responses may be abolished by an analgesic such as morphine, which will counteract all those changes. For this reason, it is routine to use analgesic drugs in addition to anaesthetic ones. […] Typical anaesthetic agents are poor at suppressing the stress response, but analgesics like morphine are very effective. […] The hormonal stress response can be shown to be harmful, especially to those who are already ill. For example, the increase in blood coagulability which evolved to reduce blood loss as a result of injury makes the patient more likely to suffer a deep venous thrombosis in the leg veins.”

“If we monitor the EEG of someone under general anaesthesia, certain identifiable changes to the signal occur. In general, the frequency spectrum of the signal slows. […] Next, the overall power of the signal diminishes. In very deep general anaesthesia, short periods of electrical silence, known as burst suppression, can be observed. Finally, the overall randomness of the signal, its entropy, decreases. In short, the EEG of someone who is anaesthetized looks completely different from someone who is awake. […] Depth of anaesthesia is no longer considered to be a linear concept […] since it is clear that anaesthesia is not a single process. It is now believed that the two most important components of anaesthesia are unconsciousness and suppression of the stress response. These can be represented on a three-dimensional diagram called a response surface. [Here’s incidentally a recent review paper on related topics, US]”
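As an aside, the ‘entropy of the signal’ mentioned above is easy to illustrate. Here is a minimal Python sketch – my own illustration, not something from the book, and far simpler than what commercial depth-of-anaesthesia monitors actually do – computing a normalized spectral entropy, which comes out lower for a slow, regular signal than for a broadband, noise-like one:

```python
import numpy as np
from scipy.signal import welch

def spectral_entropy(x, fs):
    """Normalized spectral entropy of a signal (close to 1 for a flat, noise-like spectrum)."""
    f, psd = welch(x, fs=fs, nperseg=fs * 2)   # power spectral density estimate
    p = psd / psd.sum()                        # normalize the spectrum to a distribution
    p = p[p > 0]
    return -np.sum(p * np.log2(p)) / np.log2(len(psd))

fs = 256                                       # assumed sampling rate in Hz (illustrative)
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(0)

awake_like = rng.normal(size=t.size)                                   # broadband, noise-like
anaesthetized_like = np.sin(2 * np.pi * 2 * t) + 0.1 * rng.normal(size=t.size)  # slow, regular

print(spectral_entropy(awake_like, fs))          # close to 1
print(spectral_entropy(anaesthetized_like, fs))  # substantially lower
```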

“Before the widespread advent of anaesthesia, there were very few painkilling options available. […] Alcohol was commonly given as a means of enhancing the patient’s courage prior to surgery, but alcohol has almost no effect on pain perception. […] For many centuries, opium was the only effective pain-relieving substance known. […] For general anaesthesia to be discovered, certain prerequisites were required. On the one hand, the idea that surgery without pain was achievable had to be accepted as possible. Despite tantalizing clues from history, this idea took a long time to catch on. The few workers who pursued this idea were often openly ridiculed. On the other, an agent had to be discovered that was potent enough to render a patient suitably unconscious to tolerate surgery, but not so potent that overdose (hence accidental death) was too likely. This agent also needed to be easy to produce, tolerable for the patient, and easy enough for untrained people to administer. The herbal candidates (opium, mandrake) were too unreliable or dangerous. The next reasonable candidate, and every agent since, was provided by the proliferating science of chemistry.”

“Inducing anaesthesia by intravenous injection is substantially quicker than the inhalational method. Inhalational induction may take several minutes, while intravenous induction happens in the time it takes for the blood to travel from the needle to the brain (30 to 60 seconds). The main benefit of this is not convenience or comfort but patient safety. […] It was soon discovered that the ideal balance is to induce anaesthesia intravenously, but switch to an inhalational agent […] to keep the patient anaesthetized during the operation. The template of an intravenous induction followed by maintenance with an inhalational agent is still widely used today. […] Most of the drawbacks of volatile agents disappear when the patient is already anaesthetized [and] volatile agents have several advantages for maintenance. First, they are predictable in their effects. Second, they can be conveniently administered in known quantities. Third, the concentration delivered or exhaled by the patient can be easily and reliably measured. Finally, at steady state, the concentration of volatile agent in the patient’s expired air is a close reflection of its concentration in the patient’s brain. This gives the anaesthetist a reliable way of ensuring that enough anaesthetic is present to ensure the patient remains anaesthetized.”

“All current volatile agents are colourless liquids that evaporate into a vapour which produces general anaesthesia when inhaled. All are chemically stable, which means they are non-flammable, and not likely to break down or be metabolized to poisonous products. What distinguishes them from each other are their specific properties: potency, speed of onset, and smell. Potency of an inhalational agent is expressed as MAC, the minimum alveolar concentration required to keep 50% of adults unmoving in response to a standard surgical skin incision. MAC as a concept was introduced […] in 1963, and has proven to be a very useful way of comparing potencies of different anaesthetic agents. […] MAC correlates with observed depth of anaesthesia. It has been known for over a century that potency correlates very highly with lipid solubility; that is, the more soluble an agent is in lipid […], the more potent an anaesthetic it is. This is known as the Meyer-Overton correlation […] Speed of onset is inversely proportional to water solubility. The less soluble in water, the more rapidly an agent will take effect. […] Where immobility is produced at around 1.0 MAC, amnesia is produced at a much lower dose, typically 0.25 MAC, and unconsciousness at around 0.5 MAC. Therefore, a patient may move in response to a surgical stimulus without either being conscious of the stimulus, or remembering it afterwards.”

“The most useful way to estimate the body’s physiological reserve is to assess the patient’s tolerance for exercise. Exercise is a good model of the surgical stress response. The greater the patient’s tolerance for exercise, the better the perioperative outcome is likely to be […] For a smoker who is unable to quit, stopping for even a couple of days before the operation improves outcome. […] Dying ‘on the table’ during surgery is very unusual. Patients who die following surgery usually do so during convalescence, their weakened state making them susceptible to complications such as wound breakdown, chest infections, deep venous thrombosis, and pressure sores.”

“Mechanical ventilation is based on the principle of intermittent positive pressure ventilation (IPPV), gas being ‘blown’ into the patient’s lungs from the machine. […] Inflating a patient’s lungs is a delicate process. Healthy lung tissue is fragile, and can easily be damaged by overdistension (barotrauma). While healthy lung tissue is light and spongy, and easily inflated, diseased lung tissue may be heavy and waterlogged and difficult to inflate, and therefore may collapse, allowing blood to pass through it without exchanging any gases (this is known as shunt). Simply applying higher pressures may not be the answer: this may just overdistend adjacent areas of healthier lung. The ventilator must therefore provide a series of breaths whose volume and pressure are very closely controlled. Every aspect of a mechanical breath may now be adjusted by the anaesthetist: the volume, the pressure, the frequency, and the ratio of inspiratory time to expiratory time are only the basic factors.”

“All anaesthetic drugs are poisons. Remember that in achieving a state of anaesthesia you intend to poison someone, but not kill them – so give as little as possible. [Introductory quote to a chapter, from an Anaesthetics textbook – US] […] Other cells besides neurons use action potentials as the basis of cellular signalling. For example, the synchronized contraction of heart muscle is performed using action potentials, and action potentials are transmitted from nerves to skeletal muscle at the neuromuscular junction to initiate movement. Local anaesthetic drugs are therefore toxic to the heart and brain. In the heart, local anaesthetic drugs interfere with normal contraction, eventually stopping the heart. In the brain, toxicity causes seizures and coma. To avoid toxicity, the total dose is carefully limited”.

Links of interest:

Anaesthesia.
General anaesthesia.
Muscle relaxant.
Nociception.
Arthur Ernest Guedel.
Guedel’s classification.
Beta rhythm.
Frances Burney.
Laudanum.
Dwale.
Henry Hill Hickman.
Horace Wells.
William Thomas Green Morton.
Diethyl ether.
Chloroform.
James Young Simpson.
Joseph Thomas Clover.
Barbiturates.
Inhalational anaesthetic.
Antisialagogue.
Pulmonary aspiration.
Principles of Total Intravenous Anaesthesia (TIVA).
Propofol.
Patient-controlled analgesia.
Airway management.
Oropharyngeal airway.
Tracheal intubation.
Laryngoscopy.
Laryngeal mask airway.
Anaesthetic machine.
Soda lime.
Sodium thiopental.
Etomidate.
Ketamine.
Neuromuscular-blocking drug.
Neostigmine.
Sugammadex.
Gate control theory of pain.
Multimodal analgesia.
Hartmann’s solution (…what this is called seems to depend on whom you ask, but it’s called Hartmann’s solution in the book…).
Local anesthetic.
Karl Koller.
Amylocaine.
Procaine.
Lidocaine.
Regional anesthesia.
Spinal anaesthesia.
Epidural nerve block.
Intensive care medicine.
Bjørn Aage Ibsen.
Chronic pain.
Pain wind-up.
John Bonica.
Twilight sleep.
Veterinary anesthesia.
Pearse et al. (results of paper briefly discussed in the book).
Awareness under anaesthesia (skip the first page).
Pollard et al. (2007).
Postoperative nausea and vomiting.
Postoperative cognitive dysfunction.
Monk et al. (2008).
Malignant hyperthermia.
Suxamethonium apnoea.

February 13, 2017 Posted by | Books, Chemistry, Medicine, Papers, Pharmacology | Leave a comment

Random stuff

i. Fire works a little differently than people imagine. A great ask-science comment. See also AugustusFink-nottle’s comment in the same thread.

ii.

iii. I was very conflicted about whether to link to this because I haven’t actually spent any time looking at it myself, so I don’t know if it’s any good; but according to somebody (?) who linked to it on SSC, the people behind it have academic backgrounds in evolutionary biology, which is something at least (whether you think that is a good thing will probably depend greatly on your opinion of evolutionary biologists, but I’ve definitely learned more about human mating patterns, partner interaction patterns, etc. from evolutionary biologists than I have from personal experience, so I’m probably in the ‘they sometimes have interesting ideas about these topics, and those ideas may not be terrible’ camp). I figure these guys are much more application-oriented than some of the previous sources I’ve read on related topics, such as e.g. Kappeler et al. I add the link mostly so that if I in five years’ time have a stroke that obliterates most of my decision-making skills, causing me to decide that entering the dating market might be a good idea, I’ll have some idea of where it might make sense to start.

iv. Stereotype (In)Accuracy in Perceptions of Groups and Individuals.

“Are stereotypes accurate or inaccurate? We summarize evidence that stereotype accuracy is one of the largest and most replicable findings in social psychology. We address controversies in this literature, including the long-standing  and continuing but unjustified emphasis on stereotype inaccuracy, how to define and assess stereotype accuracy, and whether stereotypic (vs. individuating) information can be used rationally in person perception. We conclude with suggestions for building theory and for future directions of stereotype (in)accuracy research.”

A few quotes from the paper:

Demographic stereotypes are accurate. Research has consistently shown moderate to high levels of correspondence accuracy for demographic (e.g., race/ethnicity, gender) stereotypes […]. Nearly all accuracy correlations for consensual stereotypes about race/ethnicity and  gender exceed .50 (compared to only 5% of social psychological findings; Richard, Bond, & Stokes-Zoota, 2003).[…] Rather than being based in cultural myths, the shared component of stereotypes is often highly accurate. This pattern cannot be easily explained by motivational or social-constructionist theories of stereotypes and probably reflects a “wisdom of crowds” effect […] personal stereotypes are also quite accurate, with correspondence accuracy for roughly half exceeding r =.50.”

“We found 34 published studies of racial-, ethnic-, and gender-stereotype accuracy. Although not every study examined discrepancy scores, when they did, a plurality or majority of all consensual stereotype judgments were accurate. […] In these 34 studies, when stereotypes were inaccurate, there was more evidence of underestimating than overestimating actual demographic group differences […] Research assessing the accuracy of  miscellaneous other stereotypes (e.g., about occupations, college majors, sororities, etc.) has generally found accuracy levels comparable to those for demographic stereotypes”

“A common claim […] is that even though many stereotypes accurately capture group means, they are still not accurate because group means cannot describe every individual group member. […] If people were rational, they would use stereotypes to judge individual targets when they lack information about targets’ unique personal characteristics (i.e., individuating information), when the stereotype itself is highly diagnostic (i.e., highly informative regarding the judgment), and when available individuating information is ambiguous or incompletely useful. People’s judgments robustly conform to rational predictions. In the rare situations in which a stereotype is highly diagnostic, people rely on it (e.g., Crawford, Jussim, Madon, Cain, & Stevens, 2011). When highly diagnostic individuating information is available, people overwhelmingly rely on it (Kunda & Thagard, 1996; effect size averaging r = .70). Stereotype biases average no higher than r = .10 ( Jussim, 2012) but reach r = .25 in the absence of individuating information (Kunda & Thagard, 1996). The more diagnostic individuating information  people have, the less they stereotype (Crawford et al., 2011; Krueger & Rothbart, 1988). Thus, people do not indiscriminately apply their stereotypes to all individual  members of stereotyped groups.” (Funder incidentally talked about this stuff as well in his book Personality Judgment).
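For readers unfamiliar with how ‘correspondence accuracy’ is operationalized in this literature: it is essentially the correlation between judged group attributes and the corresponding criterion values. A minimal sketch with made-up numbers (purely illustrative – these are not data from the paper):

```python
import numpy as np

# Purely hypothetical illustrative numbers (NOT data from the paper).
# Judged and actual values of some attribute for, say, six groups:
judged = np.array([0.30, 0.45, 0.50, 0.62, 0.70, 0.80])
actual = np.array([0.28, 0.40, 0.55, 0.60, 0.75, 0.78])

# 'Correspondence accuracy' is the correlation between judgments and criterion values:
r = np.corrcoef(judged, actual)[0, 1]
print(round(r, 2))   # correlations above ~.50 are described as high accuracy in the paper
```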

One thing worth mentioning in the context of stereotypes is that if you look at stuff like crime data – which sadly not many people do – and stratify based on stuff like country of origin, the sub-group differences you observe tend to be very large. Some of the differences between subgroups are not on the order of something like 10%, which is probably the sort of difference that could easily be ignored without major consequences; some subgroup differences easily span one or two orders of magnitude. The differences are in some contexts so large that assuming there are no differences is downright idiotic – it simply doesn’t make sense. To give an example, in Germany the probability that a random person, about whom you know nothing, has been a suspect in a thievery case is 22% if that random person happens to be of Algerian extraction, whereas it’s only 0.27% if you’re dealing with an immigrant from China. Roughly one in 13 of those Algerians have also been involved in a bodily harm case, which is true for fewer than one in 400 of the Chinese immigrants.

v. Assessing Immigrant Integration in Sweden after the May 2013 Riots. Some data from the article:

“Today, about one-fifth of Sweden’s population has an immigrant background, defined as those who were either born abroad or born in Sweden to two immigrant parents. The foreign born comprised 15.4 percent of the Swedish population in 2012, up from 11.3 percent in 2000 and 9.2 percent in 1990 […] Of the estimated 331,975 asylum applicants registered in EU countries in 2012, 43,865 (or 13 percent) were in Sweden. […] More than half of these applications were from Syrians, Somalis, Afghanis, Serbians, and Eritreans. […] One town of about 80,000 people, Södertälje, since the mid-2000s has taken in more Iraqi refugees than the United States and Canada combined.”

“Coupled with […] macroeconomic changes, the largely humanitarian nature of immigrant arrivals since the 1970s has posed challenges of labor market integration for Sweden, as refugees often arrive with low levels of education and transferable skills […] high unemployment rates have disproportionately affected immigrant communities in Sweden. In 2009-10, Sweden had the highest gap between native and immigrant employment rates among OECD countries. Approximately 63 percent of immigrants were employed compared to 76 percent of the native-born population. This 13 percentage-point gap is significantly greater than the OECD average […] Explanations for the gap include less work experience and domestic formal qualifications such as language skills among immigrants […] Among recent immigrants, defined as those who have been in the country for less than five years, the employment rate differed from that of the native born by more than 27 percentage points. In 2011, the Swedish newspaper Dagens Nyheter reported that 35 percent of the unemployed registered at the Swedish Public Employment Service were foreign born, up from 22 percent in 2005.”

“As immigrant populations have grown, Sweden has experienced a persistent level of segregation — among the highest in Western Europe. In 2008, 60 percent of native Swedes lived in areas where the majority of the population was also Swedish, and 20 percent lived in areas that were virtually 100 percent Swedish. In contrast, 20 percent of Sweden’s foreign born lived in areas where more than 40 percent of the population was also foreign born.”

vi. Book recommendations. Or rather, author recommendations. A while back I asked ‘the people of SSC’ if they knew of any fiction authors I hadn’t read yet who were both funny and easy to read. I got a lot of good suggestions, and the roughly 20 Dick Francis novels I’ve read during the fall were read as a consequence of that thread.

vii. On the genetic structure of Denmark.

viii. Religious Fundamentalism and Hostility against Out-groups: A Comparison of Muslims and Christians in Western Europe.

“On the basis of an original survey among native Christians and Muslims of Turkish and Moroccan origin in Germany, France, the Netherlands, Belgium, Austria and Sweden, this paper investigates four research questions comparing native Christians to Muslim immigrants: (1) the extent of religious fundamentalism; (2) its socio-economic determinants; (3) whether it can be distinguished from other indicators of religiosity; and (4) its relationship to hostility towards out-groups (homosexuals, Jews, the West, and Muslims). The results indicate that religious fundamentalist attitudes are much more widespread among Sunnite Muslims than among native Christians, even after controlling for the different demographic and socio-economic compositions of these groups. […] Fundamentalist believers […] show very high levels of out-group hostility, especially among Muslims.”

ix. Portal: Dinosaurs. It would have been so incredibly awesome to have had access to this kind of stuff back when I was a child. The portal includes links to articles with names like ‘Bone Wars‘ – what’s not to like? Again, awesome!

x. “you can’t determine if something is truly random from observations alone. You can only determine if something is not truly random.” (link) An important insight well expressed.

xi. Chessprogramming. If you’re interested in having a look at how chess programs work, this is a neat resource. The wiki contains lots of links with information on specific sub-topics of interest. Also chess-related: the World Championship match between Carlsen and Karjakin has started. To the extent that I follow the live coverage, it will be Svidler et al.’s coverage on chess24. Robin van Kampen and Eric Hansen – both 2600+ Elo GMs – did quite well yesterday, in my opinion.

xii. Justified by More Than Logos Alone (Razib Khan).

“Very few are Roman Catholic because they have read Aquinas’ Five Ways. Rather, they are Roman Catholic, in order of necessity, because God aligns with their deep intuitions, basic cognitive needs in terms of cosmological coherency, and because the church serves as an avenue for socialization and repetitive ritual which binds individuals to the greater whole. People do not believe in Catholicism as often as they are born Catholics, and the Catholic religion is rather well fitted to a range of predispositions to the typical human.”

November 12, 2016 Posted by | Books, Chemistry, Chess, Data, dating, Demographics, Genetics, Geography, immigration, Paleontology, Papers, Physics, Psychology, Random stuff, Religion | Leave a comment

The Nature of Statistical Evidence

Here’s my goodreads review of the book.

As I’ve observed many times before, a wordpress blog like mine is not a particularly nice place to cover mathematical topics involving equations and lots of Greek letters, so the coverage below will be more or less purely conceptual; don’t take this to mean that the book doesn’t contain formulas. Some parts of the book look like this:

[Image: Loeve – an equation-heavy excerpt from the book]
That of course makes the book hard to blog, and for other reasons than just the typographical difficulty of dealing with the equations. In general it’s hard to talk about the content of a book like this one without going into a lot of detail outlining how you get from A to B to C – usually you’re only really interested in C, but you need A and B to make sense of C. At this point I’ve more or less concluded that when covering books like this one I’ll only cover some of the main themes that are easy to discuss in a blog post, and skip (potentially important) points that are difficult to discuss in a small amount of space, which is unfortunately often the case. I should perhaps observe that although I noted in my goodreads review that there was a bit too much philosophy and a bit too little statistics in the coverage for my taste, you should definitely not take that objection to mean that this book is full of fluff; a lot of the philosophical material is ‘formal logic’ type stuff and related comments, and the book in general is quite dense. As I also noted in the goodreads review, I didn’t read this book as carefully as I might have done – for example I skipped a couple of the technical proofs because they didn’t seem worth the effort – and I’d probably need to read it again to fully understand some of the minor points made throughout the more technical parts of the coverage. That is of course a related reason why I don’t cover the book in great detail here: it’s hard work just to read the damn thing, and talking about the technical material in detail here as well would definitely be overkill, even if it would surely improve my understanding of it.

I have added some observations from the coverage below. I’ve tried to clarify beforehand which question/topic the quote in question deals with, to ease reading/understanding of the topics covered.

On how statistical methods are related to experimental science:

“statistical methods have aims similar to the process of experimental science. But statistics is not itself an experimental science, it consists of models of how to do experimental science. Statistical theory is a logical — mostly mathematical — discipline; its findings are not subject to experimental test. […] The primary sense in which statistical theory is a science is that it guides and explains statistical methods. A sharpened statement of the purpose of this book is to provide explanations of the senses in which some statistical methods provide scientific evidence.”

On mathematics and axiomatic systems (the book goes into much more detail than this):

“It is not sufficiently appreciated that a link is needed between mathematics and methods. Mathematics is not about the world until it is interpreted and then it is only about models of the world […]. No contradiction is introduced by either interpreting the same theory in different ways or by modeling the same concept by different theories. […] In general, a primitive undefined term is said to be interpreted when a meaning is assigned to it and when all such terms are interpreted we have an interpretation of the axiomatic system. It makes no sense to ask which is the correct interpretation of an axiom system. This is a primary strength of the axiomatic method; we can use it to organize and structure our thoughts and knowledge by simultaneously and economically treating all interpretations of an axiom system. It is also a weakness in that failure to define or interpret terms leads to much confusion about the implications of theory for application.”

It’s all about models:

“The scientific method of theory checking is to compare predictions deduced from a theoretical model with observations on nature. Thus science must predict what happens in nature but it need not explain why. […] whether experiment is consistent with theory is relative to accuracy and purpose. All theories are simplifications of reality and hence no theory will be expected to be a perfect predictor. Theories of statistical inference become relevant to scientific process at precisely this point. […] Scientific method is a practice developed to deal with experiments on nature. Probability theory is a deductive study of the properties of models of such experiments. All of the theorems of probability are results about models of experiments.”

But given a frequentist interpretation you can test your statistical theories with the real world, right? Right? Well…

“How might we check the long run stability of relative frequency? If we are to compare mathematical theory with experiment then only finite sequences can be observed. But for the Bernoulli case, the event that frequency approaches probability is stochastically independent of any sequence of finite length. […] Long-run stability of relative frequency cannot be checked experimentally. There are neither theoretical nor empirical guarantees that, a priori, one can recognize experiments performed under uniform conditions and that under these circumstances one will obtain stable frequencies.” [related link]
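To make the quoted point a little more concrete – this is my own gloss, using standard notation rather than anything taken from the book – consider i.i.d. Bernoulli($p$) variables $X_1, X_2, \ldots$ with partial sums $S_n = X_1 + \cdots + X_n$. The strong law of large numbers says that

\[
P\Big(\lim_{n\to\infty} \tfrac{S_n}{n} = p\Big) = 1,
\]

but the convergence event $\{\lim_{n\to\infty} S_n/n = p\}$ is a tail event: whether it occurs does not depend on the values of any finite initial segment $(X_1,\ldots,X_k)$, so it is stochastically independent of anything we could ever actually observe. That is the formal content of the claim that the long-run stability of relative frequency cannot be checked experimentally.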

What should we expect to get out of mathematical and statistical theories of inference?

“What can we expect of a theory of statistical inference? We can expect an internally consistent explanation of why certain conclusions follow from certain data. The theory will not be about inductive rationality but about a model of inductive rationality. Statisticians are used to thinking that they apply their logic to models of the physical world; less common is the realization that their logic itself is only a model. Explanation will be in terms of introduced concepts which do not exist in nature. Properties of the concepts will be derived from assumptions which merely seem reasonable. This is the only sense in which the axioms of any mathematical theory are true […] We can expect these concepts, assumptions, and properties to be intuitive but, unlike natural science, they cannot be checked by experiment. Different people have different ideas about what “seems reasonable,” so we can expect different explanations and different properties. We should not be surprised if the theorems of two different theories of statistical evidence differ. If two models had no different properties then they would be different versions of the same model […] We should not expect to achieve, by mathematics alone, a single coherent theory of inference, for mathematical truth is conditional and the assumptions are not “self-evident.” Faith in a set of assumptions would be needed to achieve a single coherent theory.”

On disagreements about the nature of statistical evidence:

“The context of this section is that there is disagreement among experts about the nature of statistical evidence and consequently much use of one formulation to criticize another. Neyman (1950) maintains that, from his behavioral hypothesis testing point of view, Fisherian significance tests do not express evidence. Royall (1997) employs the “law” of likelihood to criticize hypothesis as well as significance testing. Pratt (1965), Berger and Selke (1987), Berger and Berry (1988), and Casella and Berger (1987) employ Bayesian theory to criticize sampling theory. […] Critics assume that their findings are about evidence, but they are at most about models of evidence. Many theoretical statistical criticisms, when stated in terms of evidence, have the following outline: According to model A, evidence satisfies proposition P. But according to model B, which is correct since it is derived from “self-evident truths,” P is not true. Now evidence can’t be two different ways so, since B is right, A must be wrong. Note that the argument is symmetric: since A appears “self-evident” (to adherents of A) B must be wrong. But both conclusions are invalid since evidence can be modeled in different ways, perhaps useful in different contexts and for different purposes. From the observation that P is a theorem of A but not of B, all we can properly conclude is that A and B are different models of evidence. […] The common practice of using one theory of inference to critique another is a misleading activity.”

Is mathematics a science?

“Is mathematics a science? It is certainly systematized knowledge much concerned with structure, but then so is history. Does it employ the scientific method? Well, partly; hypothesis and deduction are the essence of mathematics and the search for counter examples is a mathematical counterpart of experimentation; but the question is not put to nature. Is mathematics about nature? In part. The hypotheses of most mathematics are suggested by some natural primitive concept, for it is difficult to think of interesting hypotheses concerning nonsense syllables and to check their consistency. However, it often happens that as a mathematical subject matures it tends to evolve away from the original concept which motivated it. Mathematics in its purest form is probably not natural science since it lacks the experimental aspect. Art is sometimes defined to be creative work displaying form, beauty and unusual perception. By this definition pure mathematics is clearly an art. On the other hand, applied mathematics, taking its hypotheses from real world concepts, is an attempt to describe nature. Applied mathematics, without regard to experimental verification, is in fact largely the “conditional truth” portion of science. If a body of applied mathematics has survived experimental test to become trustworthy belief then it is the essence of natural science.”

Then what about statistics – is statistics a science?

“Statisticians can and do make contributions to subject matter fields such as physics, and demography but statistical theory and methods proper, distinguished from their findings, are not like physics in that they are not about nature. […] Applied statistics is natural science but the findings are about the subject matter field not statistical theory or method. […] Statistical theory helps with how to do natural science but it is not itself a natural science.”

I should note that I am, and have for a long time been, in broad agreement with the author’s remarks on the nature of science and mathematics above. Popper, among many others, discussed this topic a long time ago, e.g. in The Logic of Scientific Discovery, and I’ve basically been of the opinion for probably a decade that (‘pure’) mathematics is not science but rather ‘something else’ – which doesn’t mean it’s not useful. I’ve had a harder time coming to terms with how precisely to think about statistics in this context, and here the book has been conceptually helpful.

Below I’ve added a few links to other stuff also covered in the book:
Propositional calculus.
Kolmogorov’s axioms.
Neyman-Pearson lemma.
Radon-Nikodym theorem. (Not covered in the book, but the necessity of using ‘a Radon-Nikodym derivative’ to obtain an answer to a question being asked was remarked upon at one point, and I had no clue what he was talking about – it seems that the material in the link is what he meant.)
A very specific and relevant link: Berger and Wolpert (1984). The discussion of Birnbaum’s argument from p.24 (p.40) onward is covered in some detail in the book; the author is critical of the model and explains in some detail why that is. See also: On the foundations of statistical inference (Birnbaum, 1962).

October 6, 2015 Posted by | Books, Mathematics, Papers, Philosophy, Science, Statistics | 4 Comments

Stuff

i. World Happiness Report 2013. A few figures from the publication:

(Figures 2.2, 2.4, and 2.5 from the report.)

ii. Searching for Explanations: How the Internet Inflates Estimates of Internal Knowledge.

“As the Internet has become a nearly ubiquitous resource for acquiring knowledge about the world, questions have arisen about its potential effects on cognition. Here we show that searching the Internet for explanatory knowledge creates an illusion whereby people mistake access to information for their own personal understanding of the information. Evidence from 9 experiments shows that searching for information online leads to an increase in self-assessed knowledge as people mistakenly think they have more knowledge “in the head,” even seeing their own brains as more active as depicted by functional MRI (fMRI) images.”

A little more from the paper:

“If we go to the library to find a fact or call a friend to recall a memory, it is quite clear that the information we seek is not accessible within our own minds. When we go to the Internet in search of an answer, it seems quite clear that we are consciously seeking outside knowledge. In contrast to other external sources, however, the Internet often provides much more immediate and reliable access to a broad array of expert information. Might the Internet’s unique accessibility, speed, and expertise cause us to lose track of our reliance upon it, distorting how we view our own abilities? One consequence of an inability to monitor one’s reliance on the Internet may be that users become miscalibrated regarding their personal knowledge. Self-assessments can be highly inaccurate, often occurring as inflated self-ratings of competence, with most people seeing themselves as above average [here’s a related link] […] For example, people overestimate their own ability to offer a quality explanation even in familiar domains […]. Similar illusions of competence may emerge as individuals become immersed in transactive memory networks. They may overestimate the amount of information contained in their network, producing a “feeling of knowing,” even when the content is inaccessible […]. In other words, they may conflate the knowledge for which their partner is responsible with the knowledge that they themselves possess (Wegner, 1987). And in the case of the Internet, an especially immediate and ubiquitous memory partner, there may be especially large knowledge overestimations. As people underestimate how much they are relying on the Internet, success at finding information on the Internet may be conflated with personally mastered information, leading Internet users to erroneously include knowledge stored outside their own heads as their own. That is, when participants access outside knowledge sources, they may become systematically miscalibrated regarding the extent to which they rely on their transactive memory partner. It is not that they misattribute the source of their knowledge, they could know full well where it came from, but rather they may inflate the sense of how much of the sum total of knowledge is stored internally.

We present evidence from nine experiments that searching the Internet leads people to conflate information that can be found online with knowledge “in the head.” […] The effect derives from a true misattribution of the sources of knowledge, not a change in understanding of what counts as internal knowledge (Experiment 2a and b) and is not driven by a “halo effect” or general overconfidence (Experiment 3). We provide evidence that this effect occurs specifically because information online can so easily be accessed through search (Experiment 4a–c).”

iii. Some words I’ve recently encountered on vocabulary.com: hortatory, adduce, obsequious, enunciate, ineluctable, guerdon, chthonic, condign, philippic, coruscate, exceptionable, colophon, lapidary, rubicund, frumpish, raiment, prorogue, sonorous, metonymy.

iv. (Video embedded in the original post.)

v. I have no idea how accurate this test of chess strength is (some people in this thread argue that there are probably some calibration issues at the low end), but I thought I should link to it anyway. I’d be very cautious about drawing strong conclusions about over-the-board strength from it without knowing how the tool has been validated. In over-the-board chess you have at minimum a couple of minutes per move on average, whereas this tool never gives you more than 30 seconds, so slow players will probably suffer when using it (I’d imagine this is why u/ViktorVamos got such a low estimate). For what it’s worth, my Elo estimate was 2039 (95% CI: 1859, 2220).

In related news, I recently defeated my first IM – Pablo Garcia Castro – in a blitz (3 minutes/player) game. It actually felt a bit like an anticlimax; afterwards I was thinking that it would probably have felt like a bigger deal if I hadn’t lately been getting used to winning the occasional bullet game against IMs on the ICC. Actually I think my two wins against WIM Shiqun Ni during the same bullet session felt like a bigger accomplishment, because that session was played during the Women’s World Chess Championship, and while looking up my opponent I realized that she was actually stronger than one of the contestants who made it to the quarter-finals in that event (Meri Arabidze). On the other hand bullet isn’t really chess, so…

April 15, 2015 Posted by | Astronomy, Chess, Lectures, Papers, Psychology | 2 Comments

Stuff/Links/Open Thread

i. National Health Statistics Reports, Number 49, March 22, 2012 – First Marriages in the United States: Data From the 2006–2010 National Survey of Family Growth.

“This report shows trends and group differences in current marital status, with a focus on first marriages among women and men aged 15–44 years in the United States. Trends and group differences in the timing and duration of first marriages are also discussed. […] The analyses presented in this report are based on a nationally representative sample of 12,279 women and 10,403 men aged 15–44 years in the household population of the United States.”

“In 2006–2010, […] median age at first marriage was 25.8 for women and 28.3 for men.”

“Among women, 68% of unions formed in 1997–2001 began as a cohabitation rather than as a marriage (8). If entry into any type of union, marriage or cohabitation, is taken into account, then the timing of a first union occurs at roughly the same point in the life course as marriage did in the past (9). Given the place of cohabitation in contemporary union formation, descriptions of marital behavior, particularly those concerning trends over time, are more complete when cohabitation is also measured. […] Trends in the current marital statuses of women using the 1982, 1995, 2002, and 2006–2010 NSFG indicate that the percentage of women who were currently in a first marriage decreased over the past several decades, from 44% in 1982 to 36% in 2006–2010 […]. At the same time, the percentage of women who were currently cohabiting increased steadily from 3.0% in 1982 to 11% in 2006–2010. In addition, the proportion of women aged 15–44 who were never married at the time of interview increased from 34% in 1982 to 38% in 2006–2010.”

“In 2006–2010, the probability of first marriage by age 25 was 44% for women compared with 59% in 1995, a decrease of 25%. By age 35, the probability of first marriage was 84% in 1995 compared with 78% in 2006–2010 […] By age 40, the difference in the probability of age at first marriage for women was not significant between 1995 (86%) and 2006–2010 (84%). These findings suggest that between 1995 and 2006–2010, women married for the first time at older ages; however, this delay was not apparent by age 40.”

“In 2006–2010, the probability of a first marriage lasting at least 10 years was 68% for women and 70% for men. Looking at 20 years, the probability that the first marriages of women and men will survive was 52% for women and 56% for men in 2006–2010. These levels are virtually identical to estimates based on vital statistics from the early 1970s (24). For women, there was no significant change in the probability of a first marriage lasting 20 years between the 1995 NSFG (50%) and the 2006–2010 NSFG (52%)”

“Women who had no births when they married for the first time had a higher probability of their marriage surviving 20 years (56%) compared with women who had one or more births at the time of first marriage (33%). […] Looking at spousal characteristics, women whose first husbands had been previously married (38%) had a lower probability of their first marriage lasting 20 years compared with women whose first husband had never been married before (54%). Women whose first husband had children from previous relationships had a lower probability that their first marriage would last 20 years (37%) compared with first husbands who had no other children (54%). For men, […] patterns of first marriage survival […] are similar to those shown for women for marriages that survived up to 15 years.”

“These data show trends that are consistent with broad demographic changes in the American family that have occurred in the United States over the last several decades. One such trend is an increase in the time spent unmarried among women and men. For women, there was a continued decrease in the percentage currently married for the first time — and an increase in the percent currently cohabiting — in 2006–2010 compared with earlier years. For men, there was also an increase in the percentage unmarried and in the percentage currently cohabiting between 2002 and 2006–2010. Another trend is an increase in the age at first marriage for women and men, with men continuing to marry for the first time at older ages than women. […] Previous research suggests that women with more education and better economic prospects are more likely to delay first marriage to older ages, but are ultimately more likely to become married and to stay married […]. Data from the 2006–2010 NSFG support these findings”

ii. Involuntary Celibacy: A life course analysis (review). This is not a link to the actual paper – the paper is not freely available, which is why I do not link to it – but rather a link to a report talking about what’s in that paper. However I found some of the stuff interesting:

“A member of an on-line discussion group for involuntary celibates approached the first author of the paper via email to ask about research on involuntary celibacy. It soon became apparent that little had been done, and so the discussion group volunteered to be interviewed and a research team was put together. An initial questionnaire was mailed to 35 group members, and they got a return rate of 85%. They later posted it to a web page so that other potential respondents had access to it. Eventually 60 men and 22 women took the survey.”

“Most were between the ages of 25-34, 28% were married or living with a partner, 89% had attended or completed college. Professionals (45%) and students (16%) were the two largest groups. 85% of the sample was white, 89% were heterosexual. 70% lived in the U.S. and the rest primarily in Western Europe, Canada and Australia. […] the value of this research lies in the rich descriptive data obtained about the lives of involuntary celibates, a group about which little is known. […] The questionnaire contained 13 categorical, close-ended questions assessing demographic data such as age, sex, marital status, living arrangement, income, education, employment type, area of residence, race/ethnicity, sexual orientation, religious preference, political views, and time spent on the computer. 58 open-ended questions investigated such areas as past sexual experiences, current relationships, initiating relationships, sexuality and celibacy, nonsexual relationships and the consequences of celibacy. They started out by asking about childhood experiences, progressed to questions about teen and early adult years and finished with questions about current status and the effects of celibacy.”

“78% of this sample had discussed sex with friends, 84% had masturbated as teens. The virgins and singles, however, differed from national averages in their dating and sexual experiences.”

“91% of virgins and 52 % of singles had never dated as teenagers. Males reported hesitancy in initiating dates, and females reporting a lack of invitations by males. For those who did date, their experiences tended to be very limited. Only 29% of virgins reported first sexual experiences that involved other people, and they frequently reported no sexual activity at all except for masturbation. Singles were more likely than virgins to have had an initial sexual experience that involved other people (76%), but they tended to report that they were dissatisfied with the experience. […] While most of the sample had discussed sex with friends and masturbated as teens, most virgins and singles did not date. […] Virgins and singles may have missed important transitions, and as they got older, their trajectories began to differ from those of their age peers. Patterns of sexuality in young adulthood are significantly related to dating, steady dating and sexual experience in adolescence. It is rare for a teenager to initiate sexual activity outside of a dating relationship. While virginity and lack of experience are fairly common in teenagers and young adults, by the time these respondents reached their mid-twenties, they reported feeling left behind by age peers. […] Even for the heterosexuals in the study, it appears that lack of dating and sexual experimentation in the teen years may be precursors to problems in adult sexual relationships.”

“Many of the virgins reported that becoming celibate involved a lack of sexual and interpersonal experience at several different transition points in adolescence and young adulthood. They never or rarely dated, had little experience with interpersonal sexual activity, and had never had sexual intercourse. […] In contrast, partnered celibates generally became sexually inactive by a very different process. All had initially been sexually active with their partners, but at some point stopped. At the time of the survey, sexual intimacy no longer or very rarely occurred in their relationships. The majority of them (70%) started out having satisfactory relationships, but they slowly stopped having sex as time went on.”

“shyness was a barrier to developing and maintaining relationships for many of the respondents. Virgins (94%) and singles (84%) were more likely to report shyness than were partnered respondents (20%). The men (89%) were more likely to report being shy than women (77%). 41% of virgins and 23% of singles reported an inability to relate to others socially. […] 1/3 of the respondents thought their weight, appearance, or physical characteristics were obstacles to attracting potential partners. 47% of virgins and 56% of singles mentioned these factors, compared to only 9% of partnered people. […] Many felt that their sexual development had somehow stalled in an earlier stage of life; feeling different from their peers and feeling like they will never catch up. […] All respondents perceived their lack of sexual activity in a negative light and in all likelihood, the relationship between involuntary celibacy and unhappiness, anger and depression is reciprocal, with involuntary celibacy contributing to negative feelings, but these negative feelings also causing people to feel less self-confident and less open to sexual opportunities when they occur. The longer the duration of the celibacy, the more likely our respondents were to view it as a permanent way of life. Virginal celibates tended to see their condition as temporary for the most part, but the older they were, the more likely they were to see it as permanent, and the same was true for single celibates.”

It seems to me from ‘a brief look around’ that not a lot of research has been done on this topic, which I find annoying – and yes, I’m well aware that these are old data and that the sample is small and ‘convenient’. Here’s a brief related study on the ‘Characteristics of adult women who abstain from sexual intercourse’ – the main findings:

“Of the 1801 respondents, 244 (14%) reported abstaining from intercourse in the past 6 months. Univariate analysis revealed that abstinent women were less likely than sexually active women to have used illicit drugs [odds ratio (OR) 0.47; 95% CI 0.35–0.63], to have been physically abused (OR 0.44, 95% CI 0.31–0.64), to be current smokers (OR 0.59, 95% CI 0.45–0.78), to drink above risk thresholds (OR 0.66, 95% CI 0.49–0.90), to have high Mental Health Inventory-5 scores (OR 0.7, 95% CI 0.54–0.92) and to have health insurance (OR 0.74, 95% CI 0.56–0.98). Abstinent women were more likely to be aged over 30 years (OR 1.98, 95% CI 1.51–2.61) and to have a high school education (OR 1.38, 95% CI 1.01–1.89). Logistic regression showed that age >30 years, absence of illicit drug use, absence of physical abuse and lack of health insurance were independently associated with sexual abstinence.

Conclusions

Prolonged sexual abstinence was not uncommon among adult women. Periodic, voluntary sexual abstinence was associated with positive health behaviours, implying that abstinence was not a random event. Future studies should address whether abstinence has a causal role in promoting healthy behaviours or whether women with a healthy lifestyle are more likely to choose abstinence.”
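As an aside, for readers who don’t routinely work with these measures: odds ratios and confidence intervals of the kind quoted above are typically computed from a 2×2 table, roughly as in the sketch below. The counts here are made up purely for illustration and have nothing to do with the actual study data.

```python
import math

# Hypothetical 2x2 table (made-up counts, purely for illustration):
#                         outcome present   outcome absent
# abstinent women               a                 b
# sexually active women         c                 d
a, b = 20, 224
c, d = 270, 1287

odds_ratio = (a * d) / (b * c)

# Wald-type 95% confidence interval, computed on the log-odds-ratio scale
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
log_or = math.log(odds_ratio)
ci_low = math.exp(log_or - 1.96 * se_log_or)
ci_high = math.exp(log_or + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI ({ci_low:.2f}, {ci_high:.2f})")
```

The logistic regression mentioned in the quote does essentially the same thing while adjusting for several variables at once; exponentiating a fitted coefficient gives the corresponding adjusted odds ratio.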

Here’s another more recent study – Prevalence and Predictors of Sexual Inexperience in Adulthood (unfortunately I haven’t been able to locate a non-gated link) – which I found and may have a closer look at later. A few quotes/observations:

“By adulthood, sexual activity is nearly universal: 97 % of men and 98 % of women between the ages of 25-44 report having had vaginal intercourse (Mosher, Chandra, & Jones, 2005). […] Although the majority of individuals experience this transition during adolescence or early adulthood, a small minority remain sexually inexperienced far longer. Data from the NSFG indicate that about 5% of males and 3% of females between the ages of 25 and 29 report never having had vaginal sex (Mosher et al., 2005). While the percentage of sexually inexperienced participants drops slightly among older age groups, between 1 and 2% of both males and females continue to report that they have never had vaginal sex even into their early 40s. Other nationally representative surveys have yielded similar estimates of adult sexual inexperience (Billy, Tanfer, Grady, & Klepinger, 1993)”

“Individuals who have not experienced any type of sexual activity as adults […] may differ from those who only abstain from vaginal intercourse. For example, vaginal virgins who engage in “everything but” vaginal sex – sometimes referred to as “technical virgins” […] – may abstain from vaginal sex in order to avoid its potential negative consequences […]. In contrast, individuals who have neither coital nor noncoital experience may have been unable to attract sexual partners or may have little interest in sexual involvement. Because prior analyses have generally conflated these two populations, we know virtually nothing about the prevalence or characteristics of young adults who have abstained from all types of sexual activity.”

“We used data from 2,857 individuals who participated in Waves I–IV of the National Longitudinal Study of Adolescent Health (Add Health) and reported no sexual activity (i.e., oral-genital, vaginal, or anal sex) by age 18 to identify, using discrete-time survival models, adolescent sociodemographic, biosocial, and behavioral characteristics that predicted adult sexual inexperience. The mean age of participants at Wave IV was 28.5 years (SD = 1.92). Over one out of eight participants who did not initiate sexual activity during adolescence remained abstinent as young adults. Sexual non-attraction significantly predicted sexual inexperience among both males (aOR = 0.5) and females (aOR = 0.6). Males also had lower odds of initiating sexual activity after age 18 if they were non-Hispanic Asian, reported later than average pubertal development, or were rated as physically unattractive (aORs = 0.6–0.7). Females who were overweight, had lower cognitive performance, or reported frequent religious attendance had lower odds of sexual experience (aORs = 0.7–0.8) while those who were rated by the interviewers as very attractive or whose parents had lower educational attainment had higher odds of sexual experience (aORs = 1.4–1.8). Our findings underscore the heterogeneity of this unique population and suggest that there are a number of different pathways that may lead to either voluntary or involuntary adult sexual inexperience.”
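The ‘discrete-time survival models’ mentioned here are, in practice, usually fitted as logistic regressions on a person-period data set – one row per person per time interval during which they were still at risk. The sketch below is entirely simulated; the variable names, effect sizes and age range are made up and are not taken from the Add Health data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Simulated example: n individuals followed from age 19 to 28; 'event' marks the year
# in which the outcome first occurs, x is some baseline covariate (illustrative only).
n = 2000
x = rng.normal(size=n)
hazard = 0.15 * np.exp(0.4 * x)   # made-up per-year event probability

rows = []
for i in range(n):
    for age in range(19, 29):                 # one person-period row per year at risk
        event = rng.random() < hazard[i]
        rows.append((age, x[i], int(event)))
        if event:                             # stop contributing rows after the event
            break

data = np.array(rows)
age_col, x_col, y = data[:, 0], data[:, 1], data[:, 2]

# The discrete-time survival model is just a logistic regression on this expanded data set
X = sm.add_constant(np.column_stack([age_col, x_col]))
fit = sm.Logit(y, X).fit(disp=0)
print(np.exp(fit.params))   # exponentiated coefficients are (adjusted) odds ratios
```

Presumably the aORs reported in the abstract correspond to exponentiated coefficients from models set up along these lines, with the study’s actual covariates in place of the simulated x above.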

iii. Association between breastfeeding and intelligence, educational attainment, and income at 30 years of age: a prospective birth cohort study from Brazil.

“Breastfeeding has clear short-term benefits, but its long-term consequences on human capital are yet to be established. We aimed to assess whether breastfeeding duration was associated with intelligence quotient (IQ), years of schooling, and income at the age of 30 years, in a setting where no strong social patterning of breastfeeding exists. […] A prospective, population-based birth cohort study of neonates was launched in 1982 in Pelotas, Brazil. Information about breastfeeding was recorded in early childhood. At 30 years of age, we studied the IQ (Wechsler Adult Intelligence Scale, 3rd version), educational attainment, and income of the participants. For the analyses, we used multiple linear regression with adjustment for ten confounding variables and the G-formula. […] From June 4, 2012, to Feb 28, 2013, of the 5914 neonates enrolled, information about IQ and breastfeeding duration was available for 3493 participants. In the crude and adjusted analyses, the durations of total breastfeeding and predominant breastfeeding (breastfeeding as the main form of nutrition with some other foods) were positively associated with IQ, educational attainment, and income. We identified dose-response associations with breastfeeding duration for IQ and educational attainment. In the confounder-adjusted analysis, participants who were breastfed for 12 months or more had higher IQ scores (difference of 3.76 points, 95% CI 2.20–5.33), more years of education (0.91 years, 0.42–1.40), and higher monthly incomes (341.0 Brazilian reals, 93.8–588.3) than did those who were breastfed for less than 1 month. The results of our mediation analysis suggested that IQ was responsible for 72% of the effect on income.”

This is a huge effect size.

iv. Grandmaster blunders (chess). This is quite a nice little collection; some of the best players in the world have actually played some really terrible moves over the years, which I find oddly comforting in a way..

v. History of the United Kingdom during World War I (wikipedia, ‘good article’). A few observations from the article:

“In 1915, the Ministry of Munitions under David Lloyd-George was formed to control munitions production and had considerable success.[113][114] By April 1915, just two million rounds of shells had been sent to France; by the end of the war the figure had reached 187 million,[115] and a year’s worth of pre-war production of light munitions could be completed in just four days by 1918.”

“During the war, average calories intake [in Britain] decreased only three percent, but protein intake six percent.[47]

“Energy was a critical factor for the British war effort. Most of the energy supplies came from coal mines in Britain, where the issue was labour supply. Critical however was the flow of oil for ships, lorries and industrial use. There were no oil wells in Britain so everything was imported. The U.S. pumped two-thirds of the world’s oil. In 1917, total British consumption was 827 million barrels, of which 85 percent was supplied by the United States, and 6 percent by Mexico.”

“In the post war publication Statistics of the Military Effort of the British Empire During the Great War 1914–1920 (The War Office, March 1922), the official report lists 908,371 ‘soldiers’ as being either killed in action, dying of wounds, dying as prisoners of war or missing in action in the World War. (This is broken down into the United Kingdom and its colonies 704,121; British India 64,449; Canada 56,639; Australia 59,330; New Zealand 16,711; South Africa 7,121.) […] The civilian death rate exceeded the prewar level by 292,000, which included 109,000 deaths due to food shortages and 183,577 from Spanish Flu.”

vi. House of Plantagenet (wikipedia, ‘good article’).

vii. r/Earthp*rn. There are some really nice pictures here…

March 24, 2015 Posted by | Chess, Demographics, History, IQ, Papers | Leave a comment

Random stuff

i. I’ve been slightly more busy than usual lately, with the consequence that I’ve been reading slightly less than usual. In a way this stuff has had bigger ‘tertiary effects’ (on blogging) than ‘secondary effects’ (on reading); I’ve not read that much less than usual, but reading and blogging are two different things, and blog-posts don’t write themselves. Sometimes it’s much easier for me to justify reading books than it is to justify spending time blogging the books I’ve read. I just finished Newman and Kohn’s Evidence-Based Diagnosis, but despite it being an excellent book, and despite my having already written a bit about it in a post draft, I just don’t feel like finishing that blog post now. But I also don’t feel like letting any more time pass without an update – thus this post.

ii. On another reading-related matter, I should note that even assuming (a strong assumption here) the people they asked weren’t lying, these numbers seem low:

“Descriptive analysis indicated that the hours students spent weekly (M) on academic reading (AR), extracurricular reading (ER), and the Internet (INT), were 7.72 hours, 4.24 hours, and 8.95 hours, respectively.”

But on the other hand the estimate of 19.4 hours of weekly reading reported here (table 1, page 281) actually seems to match that estimate reasonably well (the sum of the numbers in the quote is ~20.9). Incidentally don’t you also just love when people report easily convertible metrics/units like these – ‘8.95 hours’..? Anyway, if the estimates are true, (some samples of…) college students read roughly 3 hours per day on average over the course of a week, including internet reading (which makes up almost half of the total and may or may not – you can’t really tell from the abstract – include academic material like journal articles…). I sometimes get curious about these sorts of things, and/but then I usually quickly get annoyed because it’s so difficult to get good data – no good data seem to exist anywhere on such matters. This is in a way perfectly understandable (but also frustrating); I don’t even have a good idea what would be a good estimate of the ‘average’ number of hours I spend reading on an ‘average’ day, and I’m painfully aware that you can’t get that sort of information just by doing something simple like recording the number of hours/minutes spent reading each day, for obvious reasons; the number would likely cease to be particularly relevant once the data recording process stopped, even assuming there was no measurement error (‘rounding up’). Such schemes might be a way to increase the amount of reading short-term (but if they are, why are they not already used in schools? Or perhaps they are?), but unless the scheme is implemented permanently the data derived from it are not going to be particularly relevant to anything later on. I don’t think unsophisticated self-reports which simply ask people how much they read are particularly useful, but if one assumes such estimates will always tend to overestimate the amount of reading going on, such metrics still do add some value (this is related to a familiar point also made in Newman & Kohn: knowing that an estimate is biased is very different from having to conclude that the estimate is useless. Biased estimates can often add information even if you know they’re biased, especially if you know in which direction the estimate is most likely to be biased). Having said this, here are some more numbers from a different source:

“Nearly 52 percent of Americans 18–24 years of age, and just over 50 percent of all American adults, read books for pleasure […] Bibby, et al. (2009) reported that 47 percent of Canadian teenagers 15–19 years of age received a “great deal” or “quite a bit” of pleasure from reading. […] Young Canadian readers were more likely to be female than male: 56 percent of those who reported pleasure reading were female, while only 35 percent were male […] In 2009, the publishing industry reported that men in the United States only accounted for 29 percent of purchases made within the adult fiction market, compared to 40 percent of the U.K. market (Bowker LLC, 2009). The NEA surveys also consistently suggest that more women read than men: about 42 percent of men are voluntary readers of literature (defined as novels, short stories, poems, or plays in print or online), compared to 58 percent of women […] Unfortunately the NEA studies do not include in–depth reading for work or school. If this were included, the overall rates and breakdowns by sex might look very different. […] While these studies suggest that reading is enjoyed by a substantial number of North Americans, on the flip side, about half of the populations surveyed are not readers.”

“In 2008, 98 percent of Canadian high school students aged 15 to 19 were using computers one hour a day or more (Bibby, et al., 2009). About one half of those teenagers were using their computers at least two hours a day, while another 20 percent were on their computers for three to four hours, and 20 percent used their computers five hours or more each day […]. More recently it has been reported that 18–34 year old Canadians are spending an average of 20 hours a week online (Ipsos, 2010). […] A Canadian study using the Statistics Canada 2005 General Social Survey found that both heavy and moderate Internet users spend more time reading books than people who do not use the Internet, although people in all three categories of Internet usage read similar numbers of magazines and newspapers”

It was my impression while reading this that it did not seem to have occurred to the researchers that one might use a personal computer to read books (instead of an e-reader); that people don’t just use computers to read stuff online (…and play games, and watch movies, etc.), but that you can also use a computer to read books. It may not just be that ‘the sort of people who spend much time online are also the sort of people who’re more likely to read books when they’re not online’; it may also be that some of those ‘computer hours’ are actually ‘book hours’. I much prefer reading books on my computer to reading them on my e-reader if both options are available (of course one point of having an e-reader is precisely to cover the situations where both options are not available), and I don’t see any good reason to assume that I’m the only person who feels that way.

iii. Here’s a list of words I’ve encountered on vocabulary.com recently:

Recreant.
Dissimulate.
Susurration.
Derringer.
Orison.
Provender.
Sashay.
Lagniappe.
Jejune.
Patois.
Vituperation.
Nebbish.
Sojourn.

While writing this post I realized that the Merriam-Webster site also has a quiz one can play around with if one likes. I don’t think it’s nearly as useful as vocabulary.com’s approach if you want to learn new words, but I’m not sure how fair it is to even compare the two. I scored much higher than average the four times I took the test, but I didn’t like a couple of the questions in the second test because it seemed to me there were multiple correct answers. One of the ways in which vocabulary.com is clearly superior to this sort of test is of course that you’re able to provide them with feedback about issues like these, which in the long run should serve to minimize the number of problematic questions in the sample.

If you haven’t read along here very long you’ll probably not be familiar with the vocabulary.com site, and in that case you might want to read this previous post on the topic.

iv. A chess kibitzing video:

Just to let you know this is a thing, in case you didn’t know. I enjoy watching strong players play chess; it’s often quite a bit more fun than playing yourself.

v. “a child came to the hospital with cigarette burns dotting his torso. almost every patch of skin that could be covered with a tee shirt was scarred. some of the marks were old, some were very fresh.

his parents said it was a skin condition.”

Lots of other heartwarming stories in this reddit thread. I’m actually not quite sure why I even read those; some of them are really terrible.

December 19, 2014 Posted by | Books, Chess, Papers, Personal, Random stuff | Leave a comment