“Einstein emerges from this collection of quotes, drawn from many different sources, as a complete and fully rounded human being […] Knowledge of the darker side of Einstein’s life makes his achievement in science and in public affairs even more miraculous. This book shows him as he was – not a superhuman genius but a human genius, and all the greater for being human.”
I’ve recently read The Ultimate Quotable Einstein, from the foreword of which the above quote is taken. The book contains roughly 1600 quotes by or about Albert Einstein; most of the quotes are by Einstein himself, but the book also includes more than 50 pages towards the end containing quotes by others about him. I was probably not in the main target group, but I do like good quote collections, and I figured there might be enough good quotes in the book to make it worth a try. On the other hand, after having read the foreword by Freeman Dyson I knew there would be a lot of quotes in the book which I probably wouldn’t find too interesting; I’m not really sure why I should give a crap if/why a guy who died more than 60 years ago, and whom I have never met and never will, was having an affair during the early 1920s, or why I should care what Einstein thought about his mother or his ex-wife – but if that kind of stuff interests you, the book covers those things as well. My own interest in Einstein, such as it is, is mainly in ‘Einstein the scientist’ (and perhaps, in this particular context, also ‘Einstein the aphorist’), not ‘Einstein the father’ or ‘Einstein the husband’. I also don’t find his political views very interesting, but again, if you want to know what Einstein thought about things like Zionism, pacifism, and world government, the book includes quotes about such topics as well.
Overall I was a little underwhelmed by the book and the quotes it includes, but people who are interested in knowing more about Einstein will likely find a lot of valuable source material here, and I did give the book 3 stars on goodreads. I learned a lot of new things about Einstein by reading the book, but this is not surprising given how little I knew about him beforehand; for example, I had no idea that he was offered the presidency of Israel a few years before his death. I noticed only two quotes which were included more than once (a quote on pages 187-188 was repeated on page 453, and a quote on page 295 was repeated on page 455), and although I cannot guarantee that there aren’t other repeats, almost all quotes in the book are unique, in the sense that they appear only once in the coverage. It should however also be mentioned that a few quotes on specific themes are very similar to other quotes included elsewhere in the coverage; I consider this unavoidable considering the number of quotes included, though.
I have included some sample quotes from the book below – I have tried to include quotes on a wide variety of topics. All quotes without a source below are sourced quotes by Einstein (the book also contains a small collection of quotes ‘attributed to Einstein’, many of which are either not sourced or sourced in such a manner that Calaprice did not feel convinced that the quote was actually by Einstein – none of the quotes from that part of the book’s coverage are included below).
“When a blind beetle crawls over the surface of a curved branch, it doesn’t notice that the track it has covered is indeed curved. I was lucky enough to notice what the beetle didn’t notice.” (“in answer to his son Eduard’s question about why he is so famous, 1922.”)
“Teaching should be such that what is offered is perceived as a valuable gift and not as a hard duty.”
“I am not prepared to accept all his conclusions, but I consider his work an immensely valuable contribution to the science of human behavior.” (Einstein said this about Sigmund Freud during an interview. Yeah…)
“I consider him the best of the living writers.” (on Bertrand Russell. Russell incidentally also admired Einstein immensely – the last part of the book, including quotes by others about Einstein, includes this one by him: “Of all the public figures that I have known, Einstein was the one who commanded my most wholehearted admiration.”)
“I cannot understand the passive response of the whole civilized world to this modern barbarism. Doesn’t the world see that Hitler is aiming for war?” (1933. Related link.)
“Children don’t heed the life experience of their parents, and nations ignore history. Bad lessons always have to be learned anew.”
“Few people are capable of expressing with equanimity opinions that differ from the prejudices of their social environment. Most people are even incapable of forming such opinions.”
“Sometimes one pays most for things one gets for nothing.”
“Thanks to my fortunate idea of introducing the relativity principle into physics, you (and others) now enormously overrate my scientific abilities, to the point where this makes me quite uncomfortable.” (To Arnold Sommerfeld, 1908)
“No fairer destiny could be allotted to any physical theory than that it should of itself point out the way to the introduction of a more comprehensive theory, in which it lives on as a limiting case.”
“Mother nature, or more precisely an experiment, is a resolute and seldom friendly referee […]. She never says ‘yes’ to a theory; but only ‘maybe’ under the best of circumstances, and in most cases simply ‘no’.”
“The aim of science is, on the one hand, a comprehension, as complete as possible, of the connection between the sense experiences in their totality, and, on the other hand, the accomplishment of this aim by the use of a minimum of primary concepts and relations.” A related quote from the book: “Although it is true that it is the goal of science to discover rules which permit the association and foretelling of facts, this is not its only aim. It also seeks to reduce the connections discovered to the smallest possible number of mutually independent conceptual elements. It is in this striving after the rational unification of the manifold that it encounters its greatest successes.”
“According to general relativity, the concept of space detached from any physical content does not exist. The physical reality of space is represented by a field whose components are continuous functions of four independent variables – the coordinates of space and time.”
“One thing I have learned in a long life: that all our science, measured against reality, is primitive and childlike – and yet it is the most precious thing we have.”
“‘Why should I? Everybody knows me there’ (upon being told by his wife to dress properly when going to the office). ‘Why should I? No one knows me there’ (upon being told to dress properly for his first big conference).”
“Marriage is but slavery made to appear civilized.”
“Nothing is more destructive of respect for the government and the law of the land than passing laws that cannot be enforced.”
“Einstein would be one of the greatest theoretical physicists of all time even if he had not written a single line on relativity.” (Max Born)
“Einstein’s [violin] playing is excellent, but he does not deserve his world fame; there are many others just as good.” (“A music critic on an early 1920s performance, unaware that Einstein’s fame derived from physics, not music. Quoted in Reiser, Albert Einstein, 202-203”)
Here’s my goodreads review of the book. As mentioned in the review, the book was overall a slightly disappointing read – but there were some decent quotes included, and I decided I ought to publish a post with some sample quotes here, as it would be a relatively easy post to write. Do note while reading this that the book had a lot of bad quotes, so you should not take the sample quotes below to be representative of the book’s coverage in general.
i. “The aim of science is to seek the simplest explanation of complex facts. We are apt to fall into the error of thinking that the facts are simple because simplicity is the goal of our quest. The guiding motto in the life of every natural philosopher should be “Seek simplicity and distrust it.”” (Alfred North Whitehead)
ii. “Poor data and good reasoning give poor results. Good data and poor reasoning give poor results. Poor data and poor reasoning give rotten results.” (Edmund C. Berkeley)
iii. “By no process of sound reasoning can a conclusion drawn from limited data have more than a limited application.” (J.W. Mellor)
iv. “The energy produced by the breaking down of the atom is a very poor kind of thing. Anyone who expects a source of power from the transformation of these atoms is talking moonshine.” (Ernest Rutherford, 1933).
v. “An experiment is a question which science poses to Nature, and a measurement is the recording of Nature’s answer.” (Max Planck)
vi. “A fact doesn’t have to be understood to be true.” (Heinlein)
vii. “God was invented to explain mystery. God is always invented to explain those things that you do not understand. Now, when you finally discover how something works, you get some laws which you’re taking away from God; you don’t need him anymore. But you need him for the other mysteries. So therefore you leave him to create the universe because we haven’t figured that out yet; you need him for understanding those things which you don’t believe the laws will explain, such as consciousness, or why you only live to a certain length of time – life and death – stuff like that. God is always associated with those things that you do not understand.” (Feynman)
viii. “Hypotheses are the scaffolds which are erected in front of a building and removed when the building is completed. They are indispensable to the worker; but he must not mistake the scaffolding for the building.” (Goethe)
ix. “We are to admit no more cause of natural things than such as are both true and sufficient to explain their appearances.” (Newton)
x. “It is the province of knowledge to speak and it is the privilege of wisdom to listen.” (Oliver Wendell Holmes)
xi. “Light crosses space with the prodigious velocity of 6,000 leagues per second.
– La Science Populaire, April 28, 1881”

“A typographical error slipped into our last issue that is important to correct. The speed of light is 76,000 leagues per hour – and not 6,000.
– La Science Populaire, May 19, 1881”

“A note correcting a first error appeared in our issue number 68, indicating that the speed of light is 76,000 leagues per hour. Our readers have corrected this new error. The speed of light is approximately 76,000 leagues per second.
– La Science Populaire”
xii. “All models are wrong but some are useful.” (G. E. P. Box)
xiii. “the downward movement of a mass of gold or lead, or of any other body endowed with weight, is quicker in proportion to its size.” (Aristotle)
xiv. “those whom devotion to abstract discussions has rendered unobservant of the facts are too ready to dogmatize on the basis of a few observations” (-ll-).
xv. “it may properly be asked whether science can be undertaken without taking the risk of skating on the possibly thin ice of supposition. The important thing to know is when one is on the more solid ground of observation and when one is on the ice.” (W. M. O’Neil)
xvi. “If I could remember the names of all these particles, I’d be a botanist.” (Enrico Fermi)
xvii. “Theoretical physicists are accustomed to living in a world which is removed from tangible objects by two levels of abstraction. From tangible atoms we move by one level of abstraction to invisible fields and particles. A second level of abstraction takes us from fields and particles to the symmetry-groups by which fields and particles are related. The superstring theory takes us beyond symmetry-groups to two further levels of abstraction. The third level of abstraction is the interpretation of symmetry-groups in terms of states in ten-dimensional space-time. The fourth level is the world of the superstrings by whose dynamical behavior the states are defined.” (Freeman Dyson)
xviii. “Space tells matter how to move . . . and matter tells space how to curve.” (John Wheeler)
xix. “the universe is not a rigid and immutable edifice where independent matter is housed in independent space and time; it is an amorphous continuum, without any fixed architecture, plastic and variable, constantly subject to change and distortion. Wherever there is matter and motion, the continuum is disturbed. Just as a fish swimming in the sea agitates the water around it, so a star, a comet, or a galaxy distorts the geometry of the space-time through which it moves.” (Lincoln Barnett)
xx. “most physicists today place the probability of the existence of tachyons only slightly higher than the existence of unicorns” (Nick Herbert).
ii. “The man who knows everyone’s job isn’t much good at his own.” (-ll-)
iii. “It is amazing what little harm doctors do when one considers all the opportunities they have” (Mark Twain, as quoted in the Oxford Handbook of Clinical Medicine, p.595).
iv. “A first-rate theory predicts; a second-rate theory forbids and a third-rate theory explains after the event.” (Aleksander Isaakovich Kitaigorodski)
v. “[S]ome of the most terrible things in the world are done by people who think, genuinely think, that they’re doing it for the best” (Terry Pratchett, Snuff).
vi. “That was excellently observ’d, say I, when I read a Passage in an Author, where his Opinion agrees with mine. When we differ, there I pronounce him to be mistaken.” (Jonathan Swift)
vii. “Death is nature’s master stroke, albeit a cruel one, because it allows genotypes space to try on new phenotypes.” (Quote from the Oxford Handbook of Clinical Medicine, p.6)
viii. “The purpose of models is not to fit the data but to sharpen the questions.” (Samuel Karlin)
ix. “We may […] view set theory, and mathematics generally, in much the way in which we view theoretical portions of the natural sciences themselves; as comprising truths or hypotheses which are to be vindicated less by the pure light of reason than by the indirect systematic contribution which they make to the organizing of empirical data in the natural sciences.” (Quine)
x. “At root what is needed for scientific inquiry is just receptivity to data, skill in reasoning, and yearning for truth. Admittedly, ingenuity can help too.” (-ll-)
xi. “A statistician carefully assembles facts and figures for others who carefully misinterpret them.” (Quote from Mathematically Speaking – A Dictionary of Quotations, p.329. Only source given in the book is: “Quoted in Evan Esar, 20,000 Quips and Quotes“)
xii. “A knowledge of statistics is like a knowledge of foreign languages or of algebra; it may prove of use at any time under any circumstances.” (Quote from Mathematically Speaking – A Dictionary of Quotations, p. 328. The source provided is: “Elements of Statistics, Part I, Chapter I (p.4)”).
xiii. “We own to small faults to persuade others that we have not great ones.” (Rochefoucauld)
xiv. “There is more self-love than love in jealousy.” (-ll-)
xv. “We should not judge of a man’s merit by his great abilities, but by the use he makes of them.” (-ll-)
xvi. “We should gain more by letting the world see what we are than by trying to seem what we are not.” (-ll-)
xvii. “Put succinctly, a prospective study looks for the effects of causes whereas a retrospective study examines the causes of effects.” (Quote from p.49 of Principles of Applied Statistics, by Cox & Donnelly)
xviii. “… he who seeks for methods without having a definite problem in mind seeks for the most part in vain.” (David Hilbert)
xix. “Give every man thy ear, but few thy voice” (Shakespeare).
xx. “Often the fear of one evil leads us into a worse.” (Nicolas Boileau-Despréaux)
As I’ve observed many times before, a wordpress blog like mine is not a particularly nice place to cover mathematical topics involving equations and lots of Greek letters, so the coverage below will be more or less purely conceptual; don’t take this to mean that the book doesn’t contain formulas – some parts of it are dense with equations.
That of course makes the book hard to blog, also for other reasons than just the fact that the equations are typographically hard to deal with. In general it’s hard to talk about the content of a book like this one without going into a lot of details outlining how you get from A to B to C – usually you’re only really interested in C, but you need A and B to make sense of C. At this point I’ve sort of concluded that when covering books like this one I’ll only cover some of the main themes which are easy to discuss in a blog post, and skip coverage of (potentially important) points which might also be of interest if they’re difficult to discuss in a small amount of space – which is unfortunately often the case. I should perhaps observe that although I noted in my goodreads review that there was in a way a bit too much philosophy and a bit too little statistics in the coverage for my taste, you should definitely not take that objection to mean that this book is full of fluff; a lot of the philosophical material is ‘formal logic’-type stuff and related comments, and the book in general is quite dense. As I also noted in the goodreads review, I didn’t read this book as carefully as I might have done – for example I skipped a couple of the technical proofs because they didn’t seem to be worth the effort – and I’d probably need to read it again to fully understand some of the minor points made throughout the more technical parts of the coverage. That is of course a related reason why I don’t cover the book in great detail here: it’s hard work just to read the damn thing, and to talk about the technical stuff in detail here as well would definitely be overkill, even if it would surely make me understand the material better.
I have added some observations from the coverage below. I’ve tried to clarify beforehand which question/topic the quote in question deals with, to ease reading/understanding of the topics covered.
On how statistical methods are related to experimental science:
“statistical methods have aims similar to the process of experimental science. But statistics is not itself an experimental science, it consists of models of how to do experimental science. Statistical theory is a logical — mostly mathematical — discipline; its findings are not subject to experimental test. […] The primary sense in which statistical theory is a science is that it guides and explains statistical methods. A sharpened statement of the purpose of this book is to provide explanations of the senses in which some statistical methods provide scientific evidence.”
On mathematics and axiomatic systems (the book goes into much more detail than this):
“It is not sufficiently appreciated that a link is needed between mathematics and methods. Mathematics is not about the world until it is interpreted and then it is only about models of the world […]. No contradiction is introduced by either interpreting the same theory in different ways or by modeling the same concept by different theories. […] In general, a primitive undefined term is said to be interpreted when a meaning is assigned to it and when all such terms are interpreted we have an interpretation of the axiomatic system. It makes no sense to ask which is the correct interpretation of an axiom system. This is a primary strength of the axiomatic method; we can use it to organize and structure our thoughts and knowledge by simultaneously and economically treating all interpretations of an axiom system. It is also a weakness in that failure to define or interpret terms leads to much confusion about the implications of theory for application.”
It’s all about models:
“The scientific method of theory checking is to compare predictions deduced from a theoretical model with observations on nature. Thus science must predict what happens in nature but it need not explain why. […] whether experiment is consistent with theory is relative to accuracy and purpose. All theories are simplifications of reality and hence no theory will be expected to be a perfect predictor. Theories of statistical inference become relevant to scientific process at precisely this point. […] Scientific method is a practice developed to deal with experiments on nature. Probability theory is a deductive study of the properties of models of such experiments. All of the theorems of probability are results about models of experiments.”
But given a frequentist interpretation you can test your statistical theories with the real world, right? Right? Well…
“How might we check the long run stability of relative frequency? If we are to compare mathematical theory with experiment then only finite sequences can be observed. But for the Bernoulli case, the event that frequency approaches probability is stochastically independent of any sequence of finite length. […] Long-run stability of relative frequency cannot be checked experimentally. There are neither theoretical nor empirical guarantees that, a priori, one can recognize experiments performed under uniform conditions and that under these circumstances one will obtain stable frequencies.” [related link]
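The point about finite sequences can be illustrated with a small simulation (my own sketch in Python, not from the book): however long we run a Bernoulli experiment, all we ever observe is a finite sequence of relative frequencies, and nothing in the observed prefix logically constrains the limit.

```python
import random

def running_relative_frequency(p, n, seed=0):
    """Simulate n Bernoulli(p) trials and record the relative
    frequency of successes at a few checkpoints along the way."""
    rng = random.Random(seed)
    successes = 0
    checkpoints = {10, 100, 1000, 10000}
    freqs = {}
    for i in range(1, n + 1):
        successes += rng.random() < p  # one Bernoulli(p) trial
        if i in checkpoints:
            freqs[i] = successes / i
    return freqs

# Any finite run yields only finitely many observed frequencies;
# the limiting behavior itself is never observed.
freqs = running_relative_frequency(0.5, 10000)
for n, f in sorted(freqs.items()):
    print(n, round(f, 3))
```

The simulation of course illustrates rather than tests the point: the observed frequencies typically hover near p, but as the quote notes, no finite prefix can experimentally establish the long-run stability itself.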
What should we expect to get out of mathematical and statistical theories of inference?
“What can we expect of a theory of statistical inference? We can expect an internally consistent explanation of why certain conclusions follow from certain data. The theory will not be about inductive rationality but about a model of inductive rationality. Statisticians are used to thinking that they apply their logic to models of the physical world; less common is the realization that their logic itself is only a model. Explanation will be in terms of introduced concepts which do not exist in nature. Properties of the concepts will be derived from assumptions which merely seem reasonable. This is the only sense in which the axioms of any mathematical theory are true […] We can expect these concepts, assumptions, and properties to be intuitive but, unlike natural science, they cannot be checked by experiment. Different people have different ideas about what “seems reasonable,” so we can expect different explanations and different properties. We should not be surprised if the theorems of two different theories of statistical evidence differ. If two models had no different properties then they would be different versions of the same model […] We should not expect to achieve, by mathematics alone, a single coherent theory of inference, for mathematical truth is conditional and the assumptions are not “self-evident.” Faith in a set of assumptions would be needed to achieve a single coherent theory.”
On disagreements about the nature of statistical evidence:
“The context of this section is that there is disagreement among experts about the nature of statistical evidence and consequently much use of one formulation to criticize another. Neyman (1950) maintains that, from his behavioral hypothesis testing point of view, Fisherian significance tests do not express evidence. Royall (1997) employs the “law” of likelihood to criticize hypothesis as well as significance testing. Pratt (1965), Berger and Selke (1987), Berger and Berry (1988), and Casella and Berger (1987) employ Bayesian theory to criticize sampling theory. […] Critics assume that their findings are about evidence, but they are at most about models of evidence. Many theoretical statistical criticisms, when stated in terms of evidence, have the following outline: According to model A, evidence satisfies proposition P. But according to model B, which is correct since it is derived from “self-evident truths,” P is not true. Now evidence can’t be two different ways so, since B is right, A must be wrong. Note that the argument is symmetric: since A appears “self-evident” (to adherents of A) B must be wrong. But both conclusions are invalid since evidence can be modeled in different ways, perhaps useful in different contexts and for different purposes. From the observation that P is a theorem of A but not of B, all we can properly conclude is that A and B are different models of evidence. […] The common practice of using one theory of inference to critique another is a misleading activity.”
Is mathematics a science?
“Is mathematics a science? It is certainly systematized knowledge much concerned with structure, but then so is history. Does it employ the scientific method? Well, partly; hypothesis and deduction are the essence of mathematics and the search for counter examples is a mathematical counterpart of experimentation; but the question is not put to nature. Is mathematics about nature? In part. The hypotheses of most mathematics are suggested by some natural primitive concept, for it is difficult to think of interesting hypotheses concerning nonsense syllables and to check their consistency. However, it often happens that as a mathematical subject matures it tends to evolve away from the original concept which motivated it. Mathematics in its purest form is probably not natural science since it lacks the experimental aspect. Art is sometimes defined to be creative work displaying form, beauty and unusual perception. By this definition pure mathematics is clearly an art. On the other hand, applied mathematics, taking its hypotheses from real world concepts, is an attempt to describe nature. Applied mathematics, without regard to experimental verification, is in fact largely the “conditional truth” portion of science. If a body of applied mathematics has survived experimental test to become trustworthy belief then it is the essence of natural science.”
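The remark that the search for counterexamples is a mathematical counterpart of experimentation can be made concrete with a classic toy example (my own illustration, not the author’s): Euler’s polynomial n² + n + 41 yields primes for n = 0, 1, 2, …, and a mechanical search quickly locates the first n for which it fails.

```python
def is_prime(m):
    """Trial-division primality test."""
    if m < 2:
        return False
    i = 2
    while i * i <= m:
        if m % i == 0:
            return False
        i += 1
    return True

def first_counterexample():
    """Find the smallest n for which n^2 + n + 41 is composite."""
    n = 0
    while is_prime(n * n + n + 41):
        n += 1
    return n

print(first_counterexample())  # 40: 40^2 + 40 + 41 = 1681 = 41^2
```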
Then what about statistics – is statistics a science?
“Statisticians can and do make contributions to subject matter fields such as physics, and demography but statistical theory and methods proper, distinguished from their findings, are not like physics in that they are not about nature. […] Applied statistics is natural science but the findings are about the subject matter field not statistical theory or method. […] Statistical theory helps with how to do natural science but it is not itself a natural science.”
I should note that I am, and have for a long time been, in broad agreement with the author’s remarks on the nature of science and mathematics above. Popper, among many others, discussed this topic a long time ago, e.g. in The Logic of Scientific Discovery, and I’ve basically been of the opinion that (‘pure’) mathematics is not science (but rather ‘something else’ – which doesn’t mean it’s not useful) for probably a decade. I’ve had a harder time coming to terms with how precisely to deal with statistics in this context, and here the book has been conceptually helpful.
Below I’ve added a few links to other stuff also covered in the book:
Radon-Nikodym theorem. (not covered in the book, but the necessity of using ‘a Radon-Nikodym derivative’ to obtain an answer to a question being asked was remarked upon at one point, and I had no clue what he was talking about – it seems the material in the link is what he was referring to).
A very specific and relevant link: Berger and Wolpert (1984). The stuff about Birnbaum’s argument covered from p.24 (p.40) and forward is covered in some detail in the book. The author is critical of the model and explains in the book in some detail why that is. See also: On the foundations of statistical inference (Birnbaum, 1962).
“This book was originally developed alongside the lecture Systems Analysis at the Swiss Federal Institute of Technology (ETH) Zürich, on the basis of lecture notes developed over 12 years. The lecture, together with others on analysis, differential equations and linear algebra, belongs to the basic mathematical knowledge imparted on students of environmental sciences and other related areas at ETH Zürich. […] The book aims to be more than a mathematical treatise on the analysis and modeling of natural systems, yet a certain set of basic mathematical skills are still necessary. We will use linear differential equations, vector and matrix calculus, linear algebra, and even take a glimpse at nonlinear and partial differential equations. Most of the mathematical methods used are covered in the appendices. Their treatment there is brief however, and without proofs. Therefore it will not replace a good mathematics textbook for someone who has not encountered this level of math before. […] The book is firmly rooted in the algebraic formulation of mathematical models, their analytical solution, or — if solutions are too complex or do not exist — in a thorough discussion of the anticipated model properties.”
I finished the book yesterday – here’s my goodreads review (note that the first link in this post was not to the goodreads profile of the book, because goodreads has listed the book under the wrong title). I’ve never read a book about ‘systems analysis’ before, but as I also mention in the goodreads review, it turned out that much of this stuff was stuff I’d seen before. There are 8 chapters in the book. Chapter one is a brief introductory chapter; the second chapter contains a short overview of mathematical models (static models, dynamic models, discrete and continuous time models, stochastic models…); the third chapter is a brief chapter about static models (the rest of the book is about dynamic models, but they want you to at least know the difference); the fourth chapter deals with linear (differential equation) models with one variable; chapter 5 extends the analysis to linear models with several variables; chapter 6 is about non-linear models (it covers e.g. the Lotka-Volterra model (of course) and the Holling-Tanner model, both of which were covered in much more detail in Ecological Dynamics); chapter 7 deals briefly with time-discrete models and how they differ from continuous-time models (I liked Gurney and Nisbet’s coverage of this stuff a lot better, as that book had many more details about these things); and chapter 8 concludes with models including both a time- and a space-dimension, which leads to coverage of concepts such as mixing and transformation, advection, diffusion and exchange in a model context.
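As a taste of the kind of model covered in chapter 6, here is a minimal explicit-Euler sketch of the Lotka-Volterra predator-prey equations; the parameter values below are arbitrary illustrations of my own, not taken from the book.

```python
def lotka_volterra(x0, y0, alpha, beta, delta, gamma, dt, steps):
    """Integrate dx/dt = alpha*x - beta*x*y (prey) and
    dy/dt = delta*x*y - gamma*y (predators) with the explicit
    Euler method; returns the trajectory as a list of (x, y) pairs."""
    x, y = x0, y0
    traj = [(x, y)]
    for _ in range(steps):
        dx = (alpha * x - beta * x * y) * dt
        dy = (delta * x * y - gamma * y) * dt
        x, y = x + dx, y + dy
        traj.append((x, y))
    return traj

# Prey (x) and predators (y) oscillate around the non-trivial
# fixed point (gamma/delta, alpha/beta) = (20, 10) here.
traj = lotka_volterra(x0=10, y0=5, alpha=1.0, beta=0.1,
                      delta=0.075, gamma=1.5, dt=0.01, steps=5000)
```

Explicit Euler is the crudest possible integrator for this system (it slowly spirals outward rather than tracing closed orbits), which is itself a nice illustration of why the choice of numerical scheme matters for dynamic models.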
How to derive solutions to various types of differential equations; how to calculate eigenvalues and what these tell you about the model dynamics (and how to deal with them when they’re imaginary); phase diagrams/phase planes and topographical maps of system dynamics; fixed points/steady states and their properties; what’s an attractor?; what’s hysteresis, and in which model contexts might this phenomenon be present?; the difference between homogeneous and non-homogeneous differential equations and between first-order and higher-order differential equations; what role do the initial conditions play in various contexts?; etc. – it’s this kind of book. Applications included in the book are varied; some of the examples are (as already mentioned) derived from the field of ecology/mathematical biology (there are also e.g. models of phosphate distribution/dynamics in lakes and models of fish population dynamics), others are from chemistry (e.g. models dealing with gas exchange – Fick’s laws of diffusion are covered in the book, and they also talk about e.g. Henry’s law) and physics (e.g. the harmonic oscillator, the Lorenz model) – there are even a few examples from economics (e.g. dealing with interest rates). As they put it in the introduction, “Although most of the examples used here are drawn from the environmental sciences, this book is not an introduction to the theory of aquatic or terrestrial environmental systems. Rather, a key goal of the book is to demonstrate the virtually limitless practical potential of the methods presented.” I’m not sure they succeeded, but it’s certainly clear from the coverage that you can use the tools they cover in a lot of different contexts.
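The eigenvalue material mentioned above can be sketched in a few lines (my own example, not the book’s): for a linear system dx/dt = Ax, the signs of the real parts of A’s eigenvalues determine the stability of the fixed point at the origin, and non-zero imaginary parts indicate oscillation.

```python
import numpy as np

def classify_fixed_point(A):
    """Classify the origin of dx/dt = A x via the eigenvalues of A."""
    eigenvalues = np.linalg.eigvals(A)
    real = eigenvalues.real
    if np.all(real < 0):
        stability = "stable"        # all trajectories decay
    elif np.all(real > 0):
        stability = "unstable"      # all trajectories grow
    else:
        stability = "saddle/marginal"
    kind = "spiral" if np.any(eigenvalues.imag != 0) else "node"
    return stability, kind

# Damped harmonic oscillator x'' + 0.5 x' + x = 0 rewritten as a
# first-order system: eigenvalues are complex with negative real part.
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
print(classify_fixed_point(A))  # ('stable', 'spiral')
```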
I’m not quite sure how much mathematics you’ll need to know in order to read and understand this book on your own. In the coverage they seem to assume some familiarity with linear algebra, multi-variable calculus, and complex analysis (/related trigonometry) (perhaps also basic combinatorics – factorials, for example, are used without comment on how they work). You should probably take the authors at their word when they say above that the book “will not replace a good mathematics textbook for someone who has not encountered this level of math before”. A related observation is that whether or not you’ve seen this sort of stuff before, this is probably not the sort of book you’ll be able to read in a day or two.
I think I’ll try to cover the book in more detail (with much more specific coverage of some main points) tomorrow.
“There are both costs and benefits associated with conducting scientific and technological research. Whereas the benefits derived from scientific research and new technologies have often been addressed in the literature (for a good example, see Evenson et al., 1979), few of the major non-monetary societal costs associated with major expenditures on scientific research and technology have so far received much attention.
In this paper we investigate one of the major non-monetary societal cost variables associated with the conduct of scientific and technological research in the United States, namely the suicides resulting from research activities. In particular, we analyze the association between scientific and technological research expenditure patterns and the number of suicides committed using one of the most common suicide methods, namely hanging, strangulation and suffocation (HSS). We conclude from our analysis that there’s a very strong association between scientific research expenditures in the US and the frequency of suicides committed using the HSS method, and that this relationship has been stable for at least a decade. An important aspect of this association concerns the precise mechanisms through which the increase in HSS suicides takes place. Although the mechanisms are still not well elucidated, we suggest that one of the important components in this relationship may be judicial research, as initial analyses of related data have suggested that this variable may be important. We argue in the paper that our initial findings in this context provide impetus for considering this pathway a particularly important area of future research in this field.”
“Murders by bodily force (Mbf) make up a substantial number of all homicides in the US. Previous research on the topic has shown that this criminal activity causes the compromise of some common key biological functions in victims, such as respiration and cardiac function, and that many people with close social relationships with the victims are psychosocially affected as well, which means that this societal problem is clearly of some importance.
Researchers have known for a long time that the marital state of the inhabitants of the state of Mississippi and the dynamics of this variable have important nation-wide effects. Previous research has e.g. analyzed how the marriage rate in Mississippi determines the US per capita consumption of whole milk. In this paper we investigate how the dynamics of Mississippian marital patterns relate to the national Mbf numbers. We conclude from our analysis that it is very clear that there’s a strong association between the divorce rate in Mississippi and the national level of Mbf. We suggest that the effect may go through previously established channels such as e.g. milk consumption, but we also note that the precise relationship has yet to be elucidated and that further research on this important topic is clearly needed.”
This abstract is awesome as well, but I didn’t write it…
The ‘funny’ part is that I could actually easily imagine papers not too dissimilar to the ones just outlined getting published in scientific journals. Indeed, in terms of the structure I’d claim that many published papers are exactly like this. They do significance testing as well, sure, but hunting down p-values is not much different from hunting down correlations and it’s quite easy to do both. If that’s all you have, you haven’t shown much.
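The hunting itself is trivially easy to do, too. Here’s a small Python sketch (mine, obviously not from the ‘papers’) that generates 100 pairs of completely independent random walks and simply keeps the strongest correlation it stumbles on – which, given how trending series behave, will reliably be ‘very strong’:

```python
import random

# Pearson correlation, computed by hand to keep things self-contained.
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def random_walk(n, rng):
    """A cumulative sum of i.i.d. Gaussian steps - a trending series."""
    x, out = 0.0, []
    for _ in range(n):
        x += rng.gauss(0, 1)
        out.append(x)
    return out

rng = random.Random(0)
# 100 pairs of completely unrelated series; keep the best |r| we find -
# exactly the kind of 'hunting' the abstracts above rely on.
best = max(abs(pearson(random_walk(50, rng), random_walk(50, rng)))
           for _ in range(100))
print(f"strongest |r| among 100 unrelated pairs: {best:.2f}")
```

No causal relationship anywhere, yet the ‘headline’ correlation is publication-grade; swap in p-values for correlations and the procedure is the same.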
I read the book yesterday. Here’s what I wrote on goodreads:
“I’m not rating this, but I’ll note that ‘it’s an interesting model.’
I’d only really learned (…heard?) about Kuhn’s ideas through cultural osmosis (and/or perhaps a brief snippet of his work in HS? Maybe. I honestly can’t remember if we read Kuhn back then…). It’s worth actually reading the book, and I should probably have done that a long time ago.”
I was thinking about just quoting extensively from the work in this post in order to make clear what the book is about, but I’m not sure this is actually the best way to proceed. I know some readers of this blog have already read Kuhn, so it may in some sense be more useful if I say a little bit about what I think about the things he’s said, rather than focusing only on what he’s said in the work. I’ve tried to make this the sort of post that can be read and enjoyed both by people who have not read Kuhn, and by people who have, though I may not have been successful. That said, I have felt it necessary to include at least a few quotes from the work along the way in the following, in order not to misrepresent Kuhn too much.
So anyway, ‘the general model’ Kuhn has of science is one where there are three states of science. ‘Normal science’ is perhaps the most common state (this is actually not completely clear, as I don’t think he ever explicitly says as much (I may be wrong), and the inclusion of concepts like ‘mini-revolutions’ (the ‘revolutions can happen on many levels’ part) makes things even less clear, but I don’t think this is an unreasonable interpretation), where scientists in a given field have adopted a given paradigm and work and tinker with stuff within that paradigm, exploring all the nooks and crannies: “‘normal science’ means research firmly based upon one or more past scientific achievements, achievements that some particular scientific community acknowledges for a time as supplying the foundation for its further practice.” Exactly what a paradigm is is still a bit unclear to me, as he seems to me to be using the term in a lot of different ways (“One sympathetic reader, who shares my conviction that ‘paradigm’ names the central philosophical elements of the book, prepared a partial analytic index and concluded that the term is used in at least twenty-two different ways.” – a quote from the postscript).
So there’s ‘normal science’, where everything is sort of proceeding according to plan. And then there are two other states: a state of crisis, and a state of revolution. A crisis state is a state which comes about when the scientists working in their nooks and crannies gradually come to realize that perhaps the model of the world they’ve been using (‘paradigm’) may not be quite right. Something is off, the model has problems explaining some of the results – so they start questioning some of the defining assumptions. During a crisis scientists become less constrained by the paradigm when looking at the world, research becomes in some sense more random; a lot of new ideas pop up as to how to deal with the problem(s), and at some point a scientific revolution resolves the crisis – a new model replaces the old one, and the scientists can go back to doing ‘normal science’ work, which is now defined by the new paradigm rather than the old one. Young people and/or people not too closely affiliated with the old model/paradigm are, Kuhn argues, more likely to come up with the new idea that will resolve the problem which caused the crisis, and young people and new people in the field are more likely than their older colleagues to ‘convert’ to the new way of thinking. Such dynamics are actually, he adds, part of what keeps ‘normal science’ going and makes it able to proceed in the manner it does; scientists are skeptical people, and if scientists were to question the basic assumptions of the field they’re working in all the time, they’d never be able to specialize in the way they do, exploring all the nooks and crannies; they’d be spending all their time arguing about the basics instead. It should be noted that crises don’t always lead to a revolution; sometimes a crisis is resolved within the existing paradigm. He also argues that a revolution can sometimes take place without a major crisis, though he seems to consider the existence of such crises important to his overall thesis.
Crises and revolutions need not be the result of annoying data that does not fit – they may also be the result of e.g. technological advances, like the development of new tools and technology which can e.g. enable scientists to see things they were previously unable to see. Sometimes the theory upon which a new paradigm is based was presented much earlier, during the ‘normal science’ phase, but nobody took the theory seriously back then because the problems that led to the crisis had not yet manifested.
Scientists make progress when they’re doing normal science, in the sense that they tend to learn a lot of new stuff about the world during these phases. But revolutions can both overturn some of that progress (‘that was not the right way to think about these things’) and lead to further progress and new knowledge. An important thing to note here is that how paradigms change is in part a sociological process; part of what leads to change is the popularity of different models. Kuhn argues that scientists tend to prefer new paradigms which solve many of the same problems the old paradigm did, as well as some of those troublesome problems which led to the crisis – so it’s not like revolutions will necessarily lead people back to square one, with all the scientific progress made during the preceding ‘normal science’ period wiped out. But there are some problems. Textbooks, Kuhn argues, are written by the winners (i.e. the people who picked the right paradigm and get to write textbooks), and so they will often deliberately and systematically downplay the differences between the scientists working in the field now and those who worked in it – or in what came before it (the fact that normal science is conducted at all is a sign of the maturity of a field, Kuhn notes) – in the past, painting a picture of gradual, cumulative progress in the field (gigantum humeris insidentes) which perhaps is not the right way to think about what has actually happened. Sometimes a revolution will make scientists stop asking questions they used to ask, without any answer being provided by the new paradigm; there are costs as well as benefits associated with the dramatic change that takes place during scientific revolutions:
“In the process the community will sustain losses. Often some old problems must be banished. Frequently, in addition, revolution narrows the scope of the community’s professional concerns, increases the extent of its specialization, and attenuates its communication with other groups, both scientific and lay. Though science surely grows in depth, it may not grow in breadth as well. If it does so, that breadth is manifest mainly in the proliferation of scientific specialties, not in the scope of any single specialty alone. Yet despite these and other losses to the individual communities, the nature of such communities provides a virtual guarantee that both the list of problems solved by science and the precision of individual problem-solutions will grow and grow. At least, the nature of the community provides such a guarantee if there is any way at all in which it can be provided. What better criterion than the decision of the scientific group could there be?”
I quote this part also to focus in on an area where I am in disagreement with Kuhn – this relates to his implicit assumption that scientific paradigms (whatever that term may mean) are decided by scientists alone. Certainly this is not the case to the extent that the scientific paradigms equal the rules of the game for conducting science. This is actually one of several major problems I have with the model. Doing science requires money, and people who pay for the stuff will have their own ideas about what you can get away with asking questions about. What the people paying for the stuff have allowed scientists to investigate has changed over time, but some things have changed more than others and what might be termed ‘the broader cultural dimension’ seems important to me; those variables may play a very important role in deciding where science and scientists may or may not go, and although the book deals with sociological stuff in quite a bit of detail, the exclusion of broader cultural and political factors in the model is ‘a bit’ of a problem to me. Scientists are certainly not unconstrained today by such ‘external factors’, and/but most scientists alive today will not face anywhere near the same kinds of constraints on their research as their forebears living 300 years ago did – religion is but one of several elephants in the room (and that one is still really important in some parts of the world, though the role it plays may have changed).
Another big problem is how to test a model like this. Kuhn doesn’t try. He only talks about anecdotes; specific instances, examples which according to him illustrate a broader point. I’m not sure his model is completely stupid, but there are alternative ways to think about these things, including mental models with variables omitted from his model which likely lead to a better appreciation of the processes involved. Money and politics, culture/religion, coalition building and the dynamics of negotiation, things like that. How do institutions fit into all of this? These things have very important effects on how science is conducted, and the (near-)exclusion of them from a model of how to conceptualize the scientific process at least somewhat inspired by sociology and related fields seems more than a bit odd to me. I’m also not completely clear on why this model is even useful, what it adds. You can presumably approximate pretty much any developmental process by some punctuated equilibrium model like this – it seems to me to be a bit like doing a Taylor expansion: if you add enough terms it’ll look plausible, especially if you also add ‘crises’ to the model to explain the cases where no clear trend is observable. Stable development is normal science, discontinuities are revolutions, high-variance areas are crises; framed that way you suddenly realize that it’s very convenient indeed for Kuhn that crises don’t always lead to revolutions and that revolutions need not be preceded by crises – if those requirements were imposed, on the other hand, the underlying data-generating process would at least be somewhat constrained by the model (though how to actually measure ‘progress’ and ‘variance’ are still questions in need of an answer). I know that the model outlined would not explain a set of completely randomly generated numbers, but in this context I think it would do quite well – even if it’s arguable whether it has actually explained anything at all.
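To make the Taylor-expansion analogy concrete, here’s a quick Python illustration (my own, nothing to do with Kuhn’s text): partial sums of the expansion of exp(x) fit better and better as you add terms, regardless of whether the series ‘explains’ anything about the underlying process:

```python
import math

# Partial sums of the Taylor series of exp(x) around 0:
#   exp(x) ~ sum_{k=0}^{n-1} x^k / k!
def taylor_exp(x, n_terms):
    return sum(x ** k / math.factorial(k) for k in range(n_terms))

x = 1.5
# The approximation error shrinks monotonically as terms are added -
# with enough terms, any smooth curve 'looks explained'.
errors = [abs(math.exp(x) - taylor_exp(x, n)) for n in (2, 4, 8, 12)]
print(errors)
```

The fit keeps improving mechanically; no insight into why the curve looks the way it does is required, which is roughly my worry about the flexibility of Kuhn’s scheme.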
Add to the model imprecise language – 22 definitions… – and the observation that the model builder seems to be cherry-picking examples to make specific points, and what you end up with is, well…
The book was sort of interesting, but, yeah… I feel slightly tempted to revise my goodreads review after having written this post, but I’m not sure I will – it was worth reading the book and I probably should have done it a long time ago, even if only to learn what all the fuss was about (it’s my impression, which may be faulty, that this one is (‘considered to be’) one of the must-reads in this genre). Some of the hypotheses derived from the model seem perhaps to be more testable than others (‘young people are more likely to spark important developments in a field’), but even in those cases things get messy (what do you mean by ‘important’, and who is to decide that? How young is ‘young’?). A problem with the model which I have not yet mentioned is incidentally that his model of how interactions between fields, and between the scientists in those fields, take place and proceed seems to me to leave a lot to be desired; the model is very ‘field-centric’. How different fields (which are not about to combine into one), and the people working in them, interact with each other may be yet another very important variable not explored in the model.
As a historical narrative about a few specific important scientific events in the past, Kuhn’s account probably isn’t bad (and it has some interesting observations related to the history of science which I did not know). As ‘a general model of how science works’, well…
This will be my last post about the book. Go here for a background post and my overall impression of the book – I’ll limit this post to coverage of the ‘Simple Models of Complex Phenomena’ chapter which I mentioned in that post, as well as a few observations from the introduction to part 5 of the book, which talks a little bit about what the chapter is about in general terms. The chapter is in a way an overview of the kind of approach to things which you may well end up adopting unconsciously if you’re working in a field like economics or ecology, and a defence of such an approach; as mentioned in the previous post about the book, I’ve talked about these sorts of things before, but there’s some new stuff in here as well. The chapter is written in the context of Boyd and Richerson’s coverage of their ‘Darwinian approach to evolution’, but many of the observations here are of a much more general nature and relate to the application of statistical and mathematical modelling in a much broader context; and even those observations that do not directly apply to broader contexts do, as far as I can see, have what might be termed ‘generalized analogues’. The chapter coverage was actually interesting enough for me to seriously consider reading a book or two on these topics (books such as this one), despite the amount of work I know may well be required to deal with a book like this.
I exclude a lot of stuff from the chapter in this post, and there are a lot of other good chapters in the book. Again, you should read this book.
Here’s the stuff from the introduction:
“Chapter 19 is directed at those in the social sciences unfamiliar with a style of deploying mathematical models that is second nature to economists, evolutionary biologists, engineers, and others. Much science in many disciplines consists of a toolkit of very simple mathematical models. To many not familiar with the subtle art of the simple model, such formal exercises have two seemingly deadly flaws. First, they are not easy to follow. […] Second, motivation to follow the math is often wanting because the model is so cartoonishly simple relative to the real world being analyzed. Critics often level the charge ‘reductionism’ with what they take to be devastating effect. The modeler’s reply is that these two criticisms actually point in opposite directions and sum to nothing. True, the model is quite simple relative to reality, but even so, the analysis is difficult. The real lesson is that complex phenomena like culture require a humble approach. We have to bite off tiny bits of reality to analyze and build up a more global knowledge step by patient step. […] Simple models, simple experiments, and simple observational programs are the best the human mind can do in the face of the awesome complexity of nature. The alternatives to simple models are either complex models or verbal descriptions and analysis. Complex models are sometimes useful for their predictive power, but they have the vice of being difficult or impossible to understand. The heuristic value of simple models in schooling our intuition about natural processes is exceedingly important, even when their predictive power is limited. […] Unaided verbal reasoning can be unreliable […] The lesson, we think, is that all serious students of human behavior need to know enough math to at least appreciate the contributions simple mathematical models make to the understanding of complex phenomena. The idea that social scientists need less math than biologists or other natural scientists is completely mistaken.”
And below I’ve posted the chapter coverage:
“A great deal of the progress in evolutionary biology has resulted from the deployment of relatively simple theoretical models. Staddon’s, Smith’s, and Maynard Smith’s contributions illustrate this point. Despite their success, simple models have been subjected to a steady stream of criticism. The complexity of real social and biological phenomena is compared to the toylike quality of the simple models used to analyze them and their users charged with unwarranted reductionism or plain simplemindedness.
This critique is intuitively appealing—complex phenomena would seem to require complex theories to understand them—but misleading. In this chapter we argue that the study of complex, diverse phenomena like organic evolution requires complex, multilevel theories but that such theories are best built from toolkits made up of a diverse collection of simple models. Because individual models in the toolkit are designed to provide insight into only selected aspects of the more complex whole, they are necessarily incomplete. Nevertheless, students of complex phenomena aim for a reasonably complete theory by studying many related simple models. The neo-Darwinian theory of evolution provides a good example: fitness-optimizing models, one and multiple locus genetic models, and quantitative genetic models all emphasize certain details of the evolutionary process at the expense of others. While any given model is simple, the theory as a whole is much more comprehensive than any one of them.”
“In the last few years, a number of scholars have attempted to understand the processes of cultural evolution in Darwinian terms […] The idea that unifies all this work is that social learning or cultural transmission can be modeled as a system of inheritance; to understand the macroscopic patterns of cultural change we must understand the microscopic processes that increase the frequency of some culturally transmitted variants and reduce the frequency of others. Put another way, to understand cultural evolution we must account for all of the processes by which cultural variation is transmitted and modified. This is the essence of the Darwinian approach to evolution.”
“In the face of the complexity of evolutionary processes, the appropriate strategy may seem obvious: to be useful, models must be realistic; they should incorporate all factors that scientists studying the phenomena know to be important. This reasoning is certainly plausible, and many scientists, particularly in economics […] and ecology […], have constructed such models, despite their complexity. On this view, simple models are primitive, things to be replaced as our sophistication about evolution grows. Nevertheless, theorists in such disciplines as evolutionary biology and economics stubbornly continue to use simple models even though improvements in empirical knowledge, analytical mathematics, and computing now enable them to create extremely elaborate models if they care to do so. Theorists of this persuasion eschew more detailed models because (1) they are hard to understand, (2) they are difficult to analyze, and (3) they are often no more useful for prediction than simple models. […] Detailed models usually require very large amounts of data to determine the various parameter values in the model. Such data are rarely available. Moreover, small inaccuracies or errors in the formulation of the model can produce quite erroneous predictions. The temptation is to ‘tune’ the model, making small changes, perhaps well within the error of available data, so that the model produces reasonable answers. When this is done, any predictive power that the model might have is due more to statistical fitting than to the fact that it accurately represents actual causal processes. It is easy to make large sacrifices of understanding for small gains in predictive power.”
“In the face of these difficulties, the most useful strategy will usually be to build a variety of simple models that can be completely understood but that still capture the important properties of the processes of interest. Liebenstein (1976: ch. 2) calls such simple models ‘sample theories.’ Students of complex and diverse subject matters develop a large body of models from which ‘samples’ can be drawn for the purpose at hand. Useful sample theories result from attempts to satisfy two competing desiderata: they should be simple enough to be clearly and completely grasped, and at the same time they should reflect how real processes actually do work, at least to some approximation. A systematically constructed population of sample theories and combinations of them constitutes the theory of how the whole complex process works. […] If they are well designed, they are like good caricatures, capturing a few essential features of the problem in a recognizable but stylized manner and with no attempt to represent features not of immediate interest. […] The user attempts to discover ‘robust’ results, conclusions that are at least qualitatively correct, at least for some range of situations, despite the complexity and diversity of the phenomena they attempt to describe. […] Note that simple models can often be tested for their scientific content via their predictions even when the situation is too complicated to make practical predictions. Experimental or statistical controls often make it possible to expose the variation due to the processes modeled, against the background of ‘noise’ due to other ones, thus allowing a ceteris paribus prediction for purposes of empirical testing.”
“Generalized sample theories are an important subset of the simple sample theories used to understand complex, diverse problems. They are designed to capture the qualitative properties of the whole class of processes that they are used to represent, while more specialized ones are used for closer approximations to narrower classes of cases. […] One might agree with the case for a diverse toolkit of simple models but still doubt the utility of generalized sample theories. Fitness-maximizing calculations are often used as a simple caricature of how selection ought to work most of the time in most organisms to produce adaptations. Does such a generalized sample theory have any serious scientific purpose? Some might argue that their qualitative kind of understanding is, at best, useful for giving nonspecialists a simplified overview of complicated topics and that real scientific progress still occurs entirely in the construction of specialized sample theories that actually predict. A sterner critic might characterize the attempt to construct generalized models as loose speculation that actually inhibits the real work of discovering predictable relationships in particular systems. These kinds of objections implicitly assume that it is possible to do science without any kind of general model. All scientists have mental models of the world. The part of the model that deals with their disciplinary specialty is more detailed than the parts that represent related areas of science. Many aspects of a scientist’s mental model are likely to be vague and never expressed. The real choice is between an intuitive, perhaps covert, general theory and an explicit, often mathematical, one. […] To insist upon empirical science in the style of physics is to insist upon the impossible. However, to give up on empirical tests and prediction would be to abandon science and retreat to speculative philosophy. Generalized sample theories normally make only limited qualitative predictions.
The logistic model of population growth is a good elementary example. At best, it is an accurate model only of microbial growth in the laboratory. However, it captures something of the biology of population growth in more complex cases. Moreover, its simplicity makes it a handy general model to incorporate into models that must also represent other processes such as selection, and intra- and interspecific competition. If some sample theory is consistently at variance with the data, then it must be modified. The accumulation of these kinds of modifications can eventually alter general theory […] A generalized model is useful so long as its predictions are qualitatively correct, roughly conforming to the majority of cases. It is helpful if the inevitable limits of the model are understood. It is not necessarily an embarrassment if more than one alternative formulation of a general theory, built from different sample models, is more or less equally correct. In this case, the comparison of theories that are empirically equivalent makes clearer what is at stake in scientific controversies and may suggest empirical and theoretical steps toward a resolution.”
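As an aside, the logistic model they use as their elementary example is dN/dt = rN(1 - N/K); a crude Euler-integration sketch in Python (my own, with arbitrary illustrative parameter values) shows its defining behaviour, the approach to the carrying capacity K:

```python
# Logistic growth dN/dt = r*N*(1 - N/K), integrated with a crude Euler
# step; r, K, the starting population and dt are arbitrary illustrative values.
r, K = 0.5, 1000.0
N, dt = 10.0, 0.01
for _ in range(int(40 / dt)):  # integrate from t = 0 to t = 40
    N += dt * r * N * (1 - N / K)
print(round(N, 2))  # close to the carrying capacity K = 1000
```

It is exactly this kind of one-line growth term that gets bolted onto larger models representing selection or competition, as the quote above notes.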
“The thorough study of simple models includes pressing them to their extreme limits. This is especially useful at the second step of development, where simple models of basic processes are combined into a candidate generalized model of an interesting question. There are two related purposes in this exercise. First, it is helpful to have all the implications of a given simple model exposed for comparative purposes, if nothing else. A well-understood simple sample theory serves as a useful point of comparison for the results of more complex alternatives, even when some conclusions are utterly ridiculous. Second, models do not usually just fail; they fail for particular reasons that are often very informative. Just what kinds of modifications are required to make the initially ridiculous results more nearly reasonable? […] The exhaustive analysis of many sample models in various combinations is also the main means of seeking robust results (Wimsatt, 1981). One way to gain confidence in simple models is to build several models embodying different characterizations of the problem of interest and different simplifying assumptions. If the results of a model are robust, the same qualitative results ought to obtain for a whole family of related models in which the supposedly extraneous details differ. […] Similarly, as more complex considerations are introduced into the family of models, simple model results can be considered robust only if it seems that the qualitative conclusion holds for some reasonable range of plausible conditions.”
“A plausibility argument is a hypothetical explanation having three features in common with a traditional hypothesis: (1) a claim of deductive soundness, of in-principle logical sufficiency to explain a body of data; (2) sufficient support from the existing body of empirical data to suggest that it might actually be able to explain a body of data as well as or better than competing plausibility arguments; and (3) a program of research that might distinguish between the claims of competing plausibility arguments. The differences are that competing plausibility arguments (1) are seldom mutually exclusive, (2) can seldom be rejected by a single sharp experimental test (or small set of them), and (3) often end up being revised, limited in their generality or domain of applicability, or combined with competing arguments rather than being rejected. In other words, competing plausibility arguments are based on the claims that a different set of submodels is needed to achieve a given degree of realism and generality, that different parameter values of common submodels are required, or that a given model is correct as far as it goes, but applies with less generality, realism, or predictive power than its proponents claim. […] Human sociobiology provides a good example of a plausibility argument. The basic premise of human sociobiology is that fitness-optimizing models drawn from evolutionary biology can be used to understand human behavior. […] We think that the clearest way to address the controversial questions raised by competing plausibility arguments is to try to formulate models with parameters such that for some values of the critical parameters the results approximate one of the polar positions in such debates, while for others the model approximates the other position.”
“A well-developed plausibility argument differs sharply from another common type of argument that we call a programmatic claim. Most generally, a programmatic claim advocates a plan of research for addressing some outstanding problem without, however, attempting to construct a full plausibility argument. […] An attack on an existing, often widely accepted, plausibility argument on the grounds that the plausibility argument is incomplete is a kind of programmatic claim. Critiques of human sociobiology are commonly of this type. […] The criticism of human sociobiology has far too frequently depended on mere programmatic claims (often invalid ones at that, as when sociobiologists are said to ignore the importance of culture and to depend on genetic variation to explain human differences). These claims are generally accompanied by dubious burden-of-proof arguments. […] We have argued that theory about complex-diverse phenomena is necessarily made up of simple models that omit many details of the phenomena under study. It is very easy to criticize theory of this kind on the grounds that it is incomplete (or defend it on the grounds that it one day will be much more complete). Such criticism and defense is not really very useful because all such models are incomplete in many ways and may be flawed because of it. What is required is a plausibility argument that shows that some factor that is omitted could be sufficiently important to require inclusion in the theory of the phenomenon under consideration, or a plausible case that it really can be neglected for most purposes. […] It seems to us that until very recently, “nature-nurture” debates have been badly confused because plausibility arguments have often been taken to have been successfully countered by programmatic claims. It has proved relatively easy to construct reasonable and increasingly sophisticated Darwinian plausibility arguments about human behavior from the prevailing general theory.
It is also relatively easy to spot the programmatic flaws in such arguments […] The problem is that programmatic objections have not been taken to imply a promise to deliver a full plausibility claim. Rather, they have been taken as a kind of declaration of independence of the social sciences from biology. Having shown that the biological theory is in principle incomplete, the conclusion is drawn that it can safely be ignored.”
“Scientists should be encouraged to take a sophisticated attitude toward empirical testing of plausibility arguments […] Folk Popperism among scientists has had the very desirable result of reducing the amount of theory-free descriptive empiricism in many complex-diverse disciplines, but it has had the undesirable effect of encouraging a search for simple mutually exclusive hypotheses that can be accepted or rejected by single experiments. By our argument, very few important problems in evolutionary biology or the social sciences can be resolved in this way. Rather, individual empirical investigations should be viewed as weighing marginally for or against plausibility arguments. Often, empirical studies may themselves discover or suggest new plausibility arguments or reconcile old ones.”
“We suspect that most evolutionary biologists and philosophers of biology on both sides of the dispute would pretty much agree with the defense of the simple models strategy presented here. To reject the strategy of building evolutionary theory from collections of simple models is to embrace a kind of scientific nihilism in which there is no hope of achieving an understanding of how evolution works. On the other hand, there is reason to treat any given model skeptically. […] It may be possible to defend the proposition that the complexity and diversity of evolutionary phenomena make any scientific understanding of evolutionary processes impossible. Or, even if we can obtain a satisfactory understanding of particular cases of evolution, any attempt at a general, unified theory may be impossible. Some critics of adaptationism seem to invoke these arguments against adaptationism without fully embracing them. The problem is that alternatives to adaptationism must face the same problem of diversity and complexity that Darwinians use the simple model strategy to finesse. The critics, when they come to construct plausibility arguments, will also have to use relatively simple models that are vulnerable to the same attack. If there is a vulgar sociobiology, there is also a vulgar criticism of sociobiology.”
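The ‘family of related models’ idea in the first quote is easy to illustrate with a toy example of my own (not from the book): three textbook population-growth models that embody different simplifying assumptions but support the same qualitative conclusion, namely that the population settles at a stable equilibrium at the carrying capacity.

```python
# Toy robustness check (my own illustration, not from the book): three simple
# population models with different functional forms. If the qualitative
# conclusion -- convergence to a stable equilibrium at the carrying capacity K
# -- holds across the whole family, it is 'robust' in the sense quoted above.
from math import exp

def logistic(n, r=0.5, K=100.0):
    return n + r * n * (1 - n / K)

def ricker(n, r=0.5, K=100.0):
    return n * exp(r * (1 - n / K))

def beverton_holt(n, r=0.5, K=100.0):
    return (1 + r) * n / (1 + r * n / K)

def equilibrium(step, n0=5.0, iterations=500):
    # Iterate the map long enough for transients to die out.
    n = n0
    for _ in range(iterations):
        n = step(n)
    return n

for model in (logistic, ricker, beverton_holt):
    n_star = equilibrium(model)
    assert abs(n_star - 100.0) < 1e-6  # same qualitative result in each model
```

Pressing the models to their extreme limits, as the quote recommends, would here mean raising r until some members of the family (the logistic and Ricker maps, but not Beverton-Holt) start to oscillate; at that point the family no longer agrees and the conclusion is no longer robust.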
I finished the book.
I did not have a lot of nice things to say about the second half of it on goodreads. I felt it would be a bad idea to blog the book right after I’d finished it (something I occasionally do), because at that point I was actually feeling angry at the author; I hope that having now distanced myself a bit from it, I’m better able to evaluate it.
The author is a classics professor writing about science. By now I’ve had some bad experiences with authors from the humanities writing about science and scientific history – reading this book at one point reminded me of the experience I had reading the Engelhardt & Jensen book. It also reminded me of this comic – I briefly had a ‘hmmmm… is the reason I have a hard time following some of this stuff simply that the author is a fool who doesn’t know what he’s talking about?’ experience. It’s probably not fair to judge the book as harshly as I did in my goodreads review (or to link to that comic), and this guy is a lot smarter than Engelhardt and Jensen are (which should not surprise you – classicists are smart), but I frankly felt during the second half of this work that the author was wasting my time, and I get angry when people do that. He spends inordinate amounts of time on trivial points which to me seem only marginally related to the topic at hand – he’d argue they’re not ‘marginally related’, of course, but I’d argue that that’s at least in part because he’s picked the wrong title for his book (see also the review to which I linked in the previous post). There’s a lot of stuff in the second half about things like historiography and ontology, and discussions about the proper truth concept to apply in this setting – somewhat technical stuff, but certainly readable. Still, I feel he spends lots of words on trivial and irrelevant points, and there are a couple of chapters where I’ve basically engaged in extensive fisking in the margins. I don’t really want to cover all that stuff here.
I’ve added some observations from the second half of the book below, as well as some critical remarks. I’ve tried in this post to limit my coverage to the reasonably good stuff in there; if you get a good impression of the book based on the material included in this post I have to caution you that I did not think the book was very good. If you want to read the book because you’re curious to know more about ‘the wisdom of the ancients’, I’ll remind you that on the topic of science at least there simply is no such thing:
“Science is special because there is no ancient wisdom. The ancients were fools, by and large. I mean no disrespect, but if you wish to design a rifle by Aristotelian principles, or treat an illness via the Galenic system, you are a fool, following foolishness.”
Lehoux would, I am sure, disagree somewhat with that assessment (that the ancients were fools), in that he argues throughout the book that the ancients could often be said to have been reasonably justified in believing many of the things they did. I’m not sure to what extent I agree, but the argument he makes is not without merit.
“That magnets attract because of sympathy had long been, and would long continue to be, the standard explanation for their efficacy. That they can be impeded by garlic is brought in to complete the pairing of forces, since strongly sympathetic things are generally also strongly antipathetic with respect to other objects. […] in both Plutarch and Ptolemy, garlic-magnets are being invoked as a familiar example to fill out the range of the powers of the two forces. Sympathy and antipathy, the author is saying, are common — just look at all the examples […] goat’s blood as an active substance is another trope of the sympathy-antipathy argument. […] washing the magnet in goat’s blood, a substance antipathetic to the kind of thing that robs magnets of their power, negates the original antipathetic power of the garlic, and so restores the magnets. […] we should remember that — even for the eccentric empiricist — the test only becomes necessary under the artificial conditions I have created in this chapter. We know the falsity of garlic-magnets so immediately that no test [feels necessary] […] We know exactly where the disproof lies — in experience — and we know that so powerfully as to simply leave it at that. The proof that it is false is empirical. It may be a strange kind of empirical argument that never needs to come to the lab, but it is still empirical for all that. On careful analysis we can argue that this empiricism is indirect […] Our experiences of magnets, and our experiences of garlic, are quietly but very firmly mediated by our understanding of magnets and our understanding of garlic, just as Plutarch’s experiences of those things were mediated by his own understandings. But this is exactly where we hit the big epistemological snag: our argument against the garlic-magnet antipathy is no stronger, and more importantly no more or less empirical, than Plutarch’s argument for it. […]
None of the experience claims in this chapter are disingenuous. Neither we nor Plutarch are avoiding a crucial test out of fear, credulity, or duplicity. We simply don’t need to get our hands dirty. This is in part because the idea of the test becomes problematized only when we realize that there are conflicting claims resting on identical evidential bases — only then does a crucial test even suggest itself. Otherwise, we simply have an epistemological blind spot. At the same time, we recognize (as Plutarch did) how useful and reliable our classification systems are, and so even as the challenge is raised, we remain pretty confident, deep down, about what would happen to the magnet in our kitchen. The generalized appeal to experience has a lot of force, and it still has the power to trick us into thinking that the so-called “empirically obvious” is more properly empirical than it is just obvious. […]
An important part of the point of this chapter is methodological. I have taken as my starting point a question put best by Bas van Fraassen: “Is there any rational way I could come to entertain, seriously, the belief that things are some way that I now classify as absurd?” I have then tried to frame a way of understanding how we can deal with the many apparently — or even transparently — ridiculous claims of premodern science, and it is this: We should take them seriously at face value (within their own contexts). Indeed, they have the exact same epistemological foundations as many of our own beliefs about how the world works (within our own context).”
“On the ancient understanding, astrology covers a lot more ground than a modern newspaper horoscope does. It can account for everything from an individual’s personality quirks and dispositions to large-scale political and social events, to racial characteristics, crop yields, plagues, storms, and earthquakes. Its predictive and explanatory ranges include some of what is covered by the modern disciplines of psychology, economics, sociology, medicine, meteorology, biology, epidemiology, seismology, and more. […] Ancient astrology […] aspires to be […] personal, precise, and specific. It often claims that it can tell someone exactly what they are going to do, when they are going to do it, and why. It is a very powerful tool indeed. So powerful, in fact, that astrology may not leave people much room to make what they would see as their own decisions. On a strong reading of the power of the stars over human affairs, it may be the case that individuals do not have what could be considered to be free will. Accordingly, a strict determinism seems to have been associated quite commonly with astrology in antiquity.”
“Seneca […] cites the multiplicity of astrological causes as leading to uncertainty about the future and inaccuracy of prediction. Where opponents of astrology were fond of parading famous mistaken predictions, Seneca preempts that move by admitting that mistakes not only can be made, but must sometimes be made. However, these are mistakes of interpretation only, and this raises an important point: we may not have complete predictive command of all the myriad effects of the stars and their combinations, but the effects are there nonetheless. Where in Ptolemy and Pliny the effects were moderated by external (i.e., nonastrological) causes, Seneca is saying that the internal effects are all-important, but impossible to control exhaustively. […] Astrology is, in the ancient discourses, both highly rational and eminently empirical. It is surprising how much evidence there was for it, and how well it sustained itself in the face of objections […] Defenders of astrology often wielded formidable arguments that need to be taken very seriously if we are to fully understand the roles of astrology in the worlds in which it operates. The fact is that most ancient thinkers who talk about it seem to think that astrology really did work, and this for very good reasons.” [Lehoux goes into a lot of detail about this stuff, but I decided against covering it in too much detail here.]
I did not have a lot of problems with the stuff covered so far, but this point in the coverage is where I start getting annoyed at the author, so I won’t cover much more of it. Here’s an example of the kind of stuff he covers in the later chapters:
“The pessimistic induction has many minor variants in its exact wording, but all accounts are agreed on the basic argument: if you look at the history of the sciences, you find many instances of successful theories that turn out to have been completely wrong. This means that the success of our current scientific theories is no grounds for supposing that those theories are right. […]
In induction, examples are collected to prove a general point, and in this case we conclude, from the fact that wrong theories have often been successful in the past, that our own successful theories may well be wrong too.”
He talks a lot about this kind of stuff in the book, and about stuff like this as well. There’s not much in those parts about what the Romans actually knew, aside from reiteration and contextualization of material covered earlier on. A problem he’s concerned with – presumably one of the factors which motivated him to write the book – is how we might convince ourselves that our models of the world are better than those of the ancients, who also thought they had a pretty good idea of what was going on in the world; he argues this is very difficult. He also talks about Kuhn and related topics. As mentioned I don’t want to cover the stuff from the book I don’t like in too much detail here, and I added the quotes in the two paragraphs above mostly because they relate to a point (a few points?) I felt compelled to include in the coverage, in part because it is important to me and the author seems completely oblivious to it:
Science should in my opinion be full of people making mistakes and getting things wrong. This is not a condition to be avoided, this is a desirable state of affairs.
This is because scientists should be proven wrong when they are wrong. And it is because scientists should risk being proven wrong. Looking for errors, problems, mistakes – this is part of the job description.
The fact that scientists are proven wrong is not a problem, it is a consequence of the fact that scientific discovery is taking place. When scientists find out that they’ve been wrong about something, this is good news. It means we’ve learned something we didn’t know.
This line of thinking seems, from my reading of Lehoux, to be unfamiliar to him – the desirability of discovering the ways in which we’re wrong doesn’t really seem to enter the picture. Somehow Lehoux seems to think that the fact that scientists may be proven wrong later on should make us feel less secure about our models of the world. I think this is a very wrongheaded way to think about these things, and I’d actually argue the opposite: precisely because our theories might be proven wrong, we have reason to feel secure in our convictions. Theories which can be proven wrong contain more relevant information about the world (‘are better’) than theories which can’t, and theories which might in principle be proven wrong but have not yet been, despite our best attempts, should be placed pretty high up in the hierarchy of beliefs. We should feel far less secure in our convictions if there were no risk of their being proven wrong.
Without errors being continually identified and mistakes corrected we’re not learning anything new, and science is all about learning new things about the world. Science shouldn’t be thought of as building some big fancy building and protecting it against attacks at all costs, walking around hoping we got everything just right and that there’ll be no problems with water in the basement. Philosophers of science and historians of science, in my limited experience, often seem to subscribe implicitly to a model like that, presumably in part due to the methodological differences between philosophy and science – they often seem to want to talk about the risk of getting water in the basement. I think it’s much better not to worry too much about that and instead think about science in terms of unsophisticated cavemen walking around with big clubs or hammers, smashing them repeatedly into the walls of the buildings and observing which parts remain standing, in order to figure out which building materials manage the continual assaults best.
Lastly just to reiterate: Despite being occasionally interesting this book is not worth your time.
I’ve posted a few of Carolin Crawford’s astronomy lectures before – in this post I’ve added a few more:
If you want a more detailed account, Rory Barnes’ Formation and Evolution of Exoplanets is probably a good place to start, even though it’s a couple of years old and things are – as Crawford points out – changing rather fast in this field. I read the first couple of chapters of that book a while back and browsed a few of the others later on, but I decided against finishing it because it was too much work – the mathematics gets a bit ugly along the way, and if you don’t have a rather strong foundation in physics and/or maths it’s probably not worth your time, as you won’t understand much of what’s going on.
It’s sometimes a bit annoying that you can’t tell what she’s pointing at when she’s explaining what’s going on in a given picture or illustration (I find that this is a very common problem with online lectures, and it’s also sometimes an issue during the other lectures in this series), but it’s still a great lecture.
This one is actually the most recent one I’ve watched, even though it turns out it’s her first Gresham lecture. The sound quality of this lecture is a bit worse than that of the ones above, especially during the first minutes (perhaps I just got used to it? I don’t know…) but it’s pretty awesome anyway:
I think Crawford’s doing a splendid job and that she’s given some very interesting and educational lectures. Please don’t skip these videos just because they’re somewhat longer than ‘the standard youtube video’ – there’s some really awesome stuff here (the same applies, I think, to the various medical lectures I’ve posted recently – you can go back to those posts now and have a look if you skipped them the first time around; they’re all still there…). Wikipedia incidentally has great coverage of many astronomy-related topics, and I’m sure (because I’ve read some of them before, e.g. the article about Enceladus) that there are featured articles about stuff covered in these lectures waiting for you if you want to learn more. You don’t need to start at the Enceladus article if you want to learn more about Saturn’s moons – a better place to start would probably be this article.
As Razib Khan put it recently, this is truly a golden age of the mind, if you want it. As some of the readers of my most recent post (I pulled it later, and there wasn’t much to read, really – so the rest of you didn’t miss out on anything) might have inferred, I often have doubts about whether this will keep being ‘enough’ for me, for some rather narrow definitions of ‘enough’ – but I should point out that I do derive a great deal of pleasure from living in an age where you at least in theory (and to a greater and greater extent also in practice) have the option of exploring and trying to learn about almost any topic you’d care to have a go at. Even though I from time to time find myself depressed on account of wanting much more from life than what such a life of the mind on its own can possibly give me, I do think most people do not take enough advantage of the opportunities they have today in this area of life.
“Previous studies have shown that estimations of the calorie content of an unhealthy main meal food tend to be lower when the food is shown alongside a healthy item (e.g. fruit or vegetables) than when shown alone. This effect has been called the negative calorie illusion and has been attributed to averaging the unhealthy (vice) and healthy (virtue) foods leading to increased perceived healthiness and reduced calorie estimates. The current study aimed to replicate and extend these findings to test the hypothesized mediating effect of ratings of healthiness of foods on calorie estimates. […] The first two studies failed to replicate the negative calorie illusion. In a final study, the use of a reference food, closely following a procedure from a previously published study, did elicit a negative calorie illusion. No evidence was found for a mediating role of healthiness estimates. […] The negative calorie illusion appears to be a function of the contrast between a food being judged and a reference, supporting the hypothesis that the negative calorie illusion arises from the use of a reference-dependent anchoring and adjustment heuristic and not from an ‘averaging’ effect, as initially proposed. This finding is consistent with existing data on sequential calorie estimates, and highlights a significant impact of the order in which foods are viewed on how foods are evaluated.” […]
The basic idea behind the ‘averaging effect’ above is that your calorie estimate depends on how ‘healthy’ you assume the dish to be; the intuition is that if you see an apple next to an ice cream, you may think of the dish as healthier than if the apple wasn’t there, and that might lead to faultier estimates of the actual number of calories in the dish (incidentally, such an effect could presumably be detected even if people correctly infer that the latter dish has more calories than the former; what’s of interest here is the estimation error, not the actual estimate). These authors have a hard time finding a negative calorie illusion at all (they don’t in the first two studies), and in the case where they do, the mechanism is different from the one initially proposed; it seems to them that the story to be told is one about anchoring effects. I like it when replication attempts get published, especially when they fail to replicate – such studies are important. Here are a few more remarks from the study, about ‘real-world implications’:
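To see why the two proposed mechanisms come apart, here is a minimal sketch of my own (all numbers and parameter values are made up for illustration; neither model is taken from the study):

```python
# Two toy mechanisms for calorie (mis)estimation; all numbers are invented.

def averaging_estimate(vice_kcal, virtue_kcal, discount=0.25):
    # 'Averaging' account: the healthy item raises the perceived healthiness
    # of the whole dish, pulling the total estimate down by some discount.
    return (vice_kcal + virtue_kcal) * (1 - discount)

def anchoring_estimate(target_kcal, anchor_kcal, adjustment=0.6):
    # 'Anchoring-and-adjustment' account: the estimate starts from a
    # reference food and is only partially adjusted toward the true value.
    return anchor_kcal + adjustment * (target_kcal - anchor_kcal)

# Averaging predicts that ice cream (400 kcal) plus an apple (80 kcal) is
# judged below the ice cream alone:
print(averaging_estimate(400, 80))              # 360.0 < 400
# Anchoring instead predicts that the ice-cream estimate depends on which
# food serves as the reference, i.e. on viewing order:
print(anchoring_estimate(400, anchor_kcal=80))  # 272.0 after seeing the apple
print(anchoring_estimate(400, anchor_kcal=400)) # 400.0 with no contrast
```

The point of the contrast: the averaging account ties the estimation error to the composition of the dish, while the anchoring account ties it to the reference and the order in which the foods are viewed – which is what the authors report finding.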
“Calorie estimates are a simple measure of participant’s perception of foods; however they almost certainly do not reflect actual factual knowledge about a food’s calorie content. It is not currently known whether calorie estimates are related to the expected satiety for a food, or anticipated tastiness. The data from the current studies fail to show that calorie estimates are derived directly from the healthiness ratings of foods. Other studies have shown that calorie estimates are influenced by the restaurant from which a food is purchased, as well as the order in which foods are presented [current study, 11], very much supporting the contextually sensitive nature of calorie estimates. And there is some evidence that erroneous calorie estimates alter portion size selection and that lower calorie estimates for a main meal item have been shown to alter selection for drinks and side dishes.
Based on the current data, a negative calorie illusion is unlikely to be driving systematic failures in calorie estimations when incidental “healthy foods”, such as fruit and vegetables, are viewed alongside energy dense nutrition poor foods in advertisements or food labels. Foods would need to be viewed in a pre-determined sequence for systematic errors in real-world instances of calorie estimates. A couple of examples when this might occur are when food items are viewed in a meal with courses (starter, main, dessert) or when foods are seen in a specified order as they are positioned on a food menu or within the pathway around a supermarket from the entrance to the checkout tills.”
iii. You can read some pages from Popper’s Conjectures and Refutations here. A few quotes:
“I found that those of my friends who were admirers of Marx, Freud, and Adler, were impressed by a number of points common to these theories, and especially by their apparent explanatory power. These theories appeared to be able to explain practically everything that happened within the fields to which they referred. The study of any of them seemed to have the effect of an intellectual conversion or revelation, opening your eyes to a new truth hidden from those not yet initiated. Once your eyes were thus opened you saw confirming instances everywhere: the world was full of verifications of the theory. Whatever happened always confirmed it. Thus its truth appeared manifest; and unbelievers were clearly people who did not want to see the manifest truth; who refused to see it, either because it was against their class interest, or because of their repressions which were still “un-analysed” and crying aloud for treatment.
The most characteristic element in this situation seemed to me the incessant stream of confirmations, of observations which “verified” the theories in question; and this point was constantly emphasized by their adherents. A Marxist could not open a newspaper without finding on every page confirming evidence for his interpretation of history; not only in the news, but also in its presentation – which revealed the class bias of the paper – and especially of course in what the paper did not say. The Freudian analysts emphasized that their theories were constantly verified by their “clinical observations.” […] I could not think of any human behaviour which could not be interpreted in terms of either theory [Freud or Adler]. It was precisely this fact – that they always fitted, that they were always confirmed – which in the eyes of their admirers constituted the strongest argument in favour of these theories. It began to dawn on me that this apparent strength was in fact their weakness. […]
These considerations led me in the winter of 1919–20 to conclusions which I may now reformulate as follows.
(1) It is easy to obtain confirmations, or verifications, for nearly every theory – if we look for confirmations.
(2) Confirmations should count only if they are the result of risky predictions; that is to say, if, unenlightened by the theory in question, we should have expected an event which was incompatible with the theory – an event which would have refuted the theory.
(3) Every ʺgoodʺ scientific theory is a prohibition: it forbids certain things to happen. The more a theory forbids, the better it is.
(4) A theory which is not refutable by any conceivable event is nonscientific. Irrefutability is not a virtue of theory (as people often think) but a vice.
(5) Every genuine test of a theory is an attempt to falsify it, or to refute it. Testability is falsifiability; but there are degrees of testability; some theories are more testable, more exposed to refutation, than others; they take, as it were, greater risks.
(6) Confirming evidence should not count except when it is the result of a genuine test of the theory; and this means that it can be presented as a serious but unsuccessful attempt to falsify the theory. (I now speak in such cases of “corroborating evidence.”)
(7) Some genuinely testable theories, when found to be false, are still upheld by their admirers – for example by introducing ad hoc some auxiliary assumption, or by re-interpreting the theory ad hoc in such a way that it escapes refutation. Such a procedure is always possible, but it rescues the theory from refutation only at the price of destroying, or at least lowering, its scientific status. (I later described such a rescuing operation as a “conventionalist twist” or a “conventionalist stratagem.”)
One can sum up all this by saying that the criterion of the scientific status of a theory is its falsifiability, or refutability, or testability.”
iv. A couple of physics videos:
“Global eradication of polio has been the ultimate game of Whack-a-Mole for the past decade; when it seems the virus has been beaten into submission in a final refuge, up it pops in a new region. Now, as vanquishing polio worldwide appears again within reach, another insidious threat may be in store from infection sources hidden in plain view.
Polio’s latest redoubts are “chronic excreters,” people with compromised immune systems who, having swallowed weakened polioviruses in an oral vaccine as children, generate and shed live viruses from their intestines and upper respiratory tracts for years. Healthy children react to the vaccine by developing antibodies that shut down viral replication, thus gaining immunity to infection. But chronic excreters cannot quite complete that process and instead churn out a steady supply of viruses. The oral vaccine’s weakened viruses can mutate and regain wild polio’s hallmark ability to paralyze the people it infects. After coming into wider awareness in the mid-1990s, the condition shocked researchers. […] Chronic excreters are generally only discovered when they develop polio after years of surreptitiously spreading the virus.”
Wikipedia incidentally has a featured article about Poliomyelitis here.
First, here’s a link. Some quotes below, some comments in the last part of the post:
“I refer to an account of causal order based on Simon’s seminal analysis as the structural account. It is structural in the sense that what matters for determining the causal order is the relationship among the parameters and the variables and among the variables themselves. The parameterization – that is, the identification of a privileged set of parameters that govern the functional relationships – is the source of the causal asymmetries that define the causal order. The idea of a privileged parameterization can be made more precise by noting that a set of parameters is privileged when its members are, in the terminology of the econometricians, variation-free. A parameter is variation-free if, and only if, the fact that other parameters take some particular values in their ranges does not restrict the range of admissible values for that parameter.
“Defining parameters as variation-free variables has a similar flavor to Hans Reichenbach’s (1956) Principle of the Common Cause: any genuine correlation among variables has a causal explanation – either one causes the other, they are mutual causes, or they have a common cause. Since we represent causal connections as obtaining only between variables simpliciter, we insist that parameters not display any mutual constraints. […] the variation-freeness of parameters is only a representational convention. Any situation in which it appears that putative parameters are mutually constraining can always be rewritten so that the constraints are moved into the functional forms that connect variables to each other.” […]
“John Anderson’s (1938, p. 128) notion of a causal field is helpful (see also Mackie 1980, p. 35; Hoover 2001, pp. 41–49). The causal field consists of background conditions that, for analytical or pragmatic reasons, we would like to set aside in order to focus on some more salient causal system. We are justified in doing so when, in fact, they do not change or when the changes are causally irrelevant. In terms of representation within the structural account, setting aside causes amounts to fixing certain parameters to constant values. The effect is not unlike Pearl’s or Woodward’s wiping out of a causal arrow, though somewhat more delicate. The replacement of a parameter by a constant amounts to absorbing that part of the causal mechanism into the functional form that connects the remaining parameters and variables.” (From chapter 3, ‘Identity, Structure, and Causal Representation in Scientific Models’. This was one of the better chapters).
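The ‘variation-free parameters’ idea and the ‘causal field’ move in the quotes above can be made concrete with a toy example. This is entirely my own sketch, not from the book – two parameters that can be set independently, with the causal order of the variables falling out of which variables each parameter reaches:

```python
# Toy structural model in the spirit of Simon's causal ordering (my sketch).
# Parameters a and b are variation-free: setting one places no restriction
# on the admissible values of the other.

def solve(a, b):
    x = a          # x is fixed by parameter a alone (causally prior)
    y = b * x      # y depends on x and on parameter b (causally posterior)
    return x, y

# The asymmetry that defines the causal order x -> y:
# varying a moves both variables, varying b moves only y.
print(solve(a=2, b=3))    # (2, 6)
print(solve(a=5, b=3))    # (5, 15): a moved both x and y
print(solve(a=2, b=10))   # (2, 20): b moved only y

# 'Causal field' in the structural account: fixing a parameter to a constant
# sets that cause aside, absorbing it into the functional form of what remains.
def solve_with_field(b, A=2):
    return b * A   # x has been absorbed into the form; only y is left
```

The point of the last function is the one made in the quote: replacing the parameter `a` with the constant `A` does not delete the mechanism, it folds it into the functional form connecting the remaining parameter and variable.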
“Certain substantial idealisations need to be taken also when the RD model [replicator dynamics model, US] is interpreted biologically. A different set of substantial idealisations needs to be taken when the RD model is interpreted socially. By making these different idealisations, we adapt the model for its respective representative uses. This is standard scientific practice: most, and possibly all, model uses involve idealisations. Yet when the same formal structure is employed to construct different, more specific mechanistic models, and each of these models involves different idealisations, one has to be careful when inferring purported similarities between these different mechanisms based on the common formal structure. […] the RD equation is adapted for its respective representative tasks. In the course of each adaptation, certain features of the RD are drawn on – others are accepted as useful or at least harmless idealisations. Which features are drawn on and which are accepted as idealisations differ with each adaptation. The mechanism that each adaptation of the RD represents is substantially different from each other and does not share any or little causal structure between each other.” (From Chapter 5: ‘Models of Mechanisms: The Case of the Replicator Dynamics’).
“Before formulating [a] claim, it is necessary first to clear up some terminology. Leuridan[‘s] definition ignores three traditional distinctions that have brought much-needed clarity to the discussions of laws in the philosophy of science. First, we distinguish laws (metaphysical entities that produce or are responsible for regularities) and law statements (descriptions of laws). If one does not respect this distinction, one runs the risk (as Leuridan does) of unintentionally suggesting that sentences, equations, or models are responsible for the fact that certain stable regularities hold. In like fashion, we distinguish regularities, which are statistical patterns of dependence and independence among magnitudes, from generalizations, which describe regularities. Finally, we distinguish regularities from laws, which produce or otherwise explain the patterns of dependence and independence among magnitudes (or so one might hold). […]
Strict law statements, as Leuridan understands them, are nonvacuous, universally quantified, and exceptionless statements that are unlimited in scope, apply in all times and places, and contain only purely qualitative predicates (2010, p. 318). Noting that few law statements in any science live up to these standards, Leuridan argues that the focus on strict law statements (and presumably also on strict laws) is unhelpful for understanding science. Instead, he focuses on the concept of a pragmatic law (or p-law). Following Sandra Mitchell (1997, 2000, 2003, 2009), Leuridan understands p-law statements as descriptions of stable and strong regularities that can be used to predict, explain, and manipulate phenomena. A regularity is stable in proportion to the range of conditions under which it continues to hold and to the size of the space-time region in which it holds (2010, p. 325). A regularity is strong if it is deterministic or frequent. p-law statements need not satisfy the criteria for strict law statements.” (From chapter 7: ‘Mechanisms and Laws: Clarifying the Debate’)
“This section has illustrated two central points concerning extrapolation. First, it is not necessary that the causal relationship to be extrapolated is the same in the model as in the target. Given knowledge of the probability distributions for the model and target along with the selection diagram, it can be possible to make adjustments to account for differences. Secondly, the conditions needed for extrapolation vary with the type of claim to be extrapolated. In general, the more informative the causal claim, the more stringent the background assumptions needed to justify its transfer. This second point is very important for explaining how extrapolation can remain possible even when substantial uncertainty exists about the selection diagram. […]
I should emphasize that the point here is definitely not to insist upon the infirmity of causal inferences grounded in extrapolation and observational data. Uncertainties frequently arise in experiments too, especially those involving human subjects (for instance, due to noncompliance, i.e., the failure of some subjects in the experiment to follow the experimental protocol). Such uncertainties are inherent in any attempts to learn about causation in large complex systems wherein numerous practical and ethical concerns restrict the types of studies that are possible. Consequently, scientific inference in such situations usually must build a cumulative case from a variety of lines of evidence none of which is decisive in isolation. Although that may seem a rather obvious point, it does seem to get overlooked in some critical discussions of extrapolation. […] critiques which observe that extrapolations rarely if ever constitute definitive evidence sail wide of the mark. Building a case based on the coherence of multiple lines of imperfect evidence is the norm for social science and other sciences that study complex systems that are widely diffused across space and time. To insist otherwise is to misconstrue the nature of science and to obstruct applications of scientific knowledge to many pressing real-world problems.” (From chapter 10: ‘Mechanisms and Extrapolation in the Abortion-Crime Controversy’.)
“In 1992, Heckman published a seminal paper containing ‘most of the standard objections’ against randomised experiments in the social sciences. Heckman focused on the non-comparative evaluation of social policy programmes, where randomisation simply decided who would join them (without allocating the rest to a control group). Heckman claimed that even if randomisation allows the experimenters to reduce selection biases, it may produce a different bias. Specifically, experimental subjects might behave differently if joining the programme did not require ‘a lottery’. Randomisation can thus interfere with the decision patterns (the causes of action) presupposed in the programme under evaluation. […] Heckman’s main objection is that randomisation tends to eliminate risk-averse persons. This is only acceptable if risk aversion is an irrelevant trait for the outcome under investigation […] However, even if irrelevant, it compels experimenters to deal with bigger pools of potential participants in order to meet the desired sample size, so the exclusion of risk-averse subjects does not disrupt recruitment. But bigger pools may affect in turn the quality of the experiment, if it implies higher costs. One way or another, argues Heckman, randomisation is not neutral regarding the results of the experiment.” (…known stuff, but I figured I should quote it anyway as it’s unlikely that all readers are familiar with this problem. From chapter 11: ‘Causality, Impartiality and Evidence-Based Policy’. How to deal with the problem? Here’s what they conclude:)
“To sum up, in RFEs [Randomized Field Experiments – US], randomisation may generate a self-selection bias, which we can only avoid with a partial or total masking of the allocation procedure. We have argued that this is a viable solution only insofar as the trial participants do not have strong preferences about the trial outcome. If they do, we cannot assume that blinded randomisation will be a control for their preferences unless we test for its success. We will only be able to claim that the trial has been impartial regarding the participants’ preferences if we have a positive proof of them being ignorant of the comparative nature of the experiment. Hence, in RFEs, randomisation is not a strong warrant of impartiality per se: we need to prove in addition that it has been masked successfully.”
On a general note, I found some of the stuff in this book interesting, but there was some confusing stuff in there as well. I had at least some background knowledge about quite a few of the subjects covered, but a lot of the material is written by people with a completely different background (many of the contributors are philosophers of science), and in some chapters I had a hard time ‘translating’ a specific contributor’s writing (gibberish? It’s not a nice word to use, but I’m tempted to use it here anyway) into stuff related to the science/the real world – I was quite close to walking away from the book while reading chapters 8 and 9, which deal with natural selection and causal processes. I didn’t, but you should most certainly not pick up this book in order to figure out how natural selection ‘actually works’; if that’s your goal, read Dawkins instead. A few times I had an ‘I knew this, but that’s actually an interesting way to think about it’ experience, and I generally like having those. As in all books with multiple contributors, there’s some variation in the quality of the material across chapters – and as you might infer from the comments above, I didn’t think very highly of chapters 8 and 9. There were other chapters which also did not really interest me much. I did read it all, though.
Overall I’m a little disappointed, but it’s not all bad. I gave it 2 stars on goodreads, and towards the end I moved significantly closer to the 3 star rating than the one star rating. I wouldn’t recommend it though; considering how much you’re likely to get out of this, it’s probably for most people simply too much work – it’s not an easy book to read.
i. I’ve read The Murder of Roger Ackroyd. I’ll say very little about the book here because I don’t want to spoil it in any way – but I do want to say that the book is awesome. I read it in one sitting, and I gave it 5 stars on goodreads (av.: 4.09); I think it’s safe to say it’s one of the best crime novels I’ve ever read (and I’ll remind you again that even though I haven’t read that much crime fiction, I have read some – e.g. every Sherlock Holmes story ever published and every Inspector Morse novel written by Colin Dexter). The cleverness of the plot reminded me of a few Asimov novels I read a long time ago. A short while after I’d finished the book I was in the laundry room about to start the washing machine when a big smile spread across my face, and I was actually close to laughing – because damn, the book is just so clever, so brilliant!
I highly recommend the book.
ii. I have been watching a few of the videos in the Introduction to Higher Mathematics YouTube series by Bill Shillito; here are a couple of examples:
I’m not super impressed by these videos at this point, but I figured I might as well link to them anyway. There are 19 videos in the playlist.
iii. Mind the Gap: Disparity Between Research Funding and Costs of Care for Diabetic Foot Ulcers. A brief comment from this month’s issue of Diabetes Care. The main point:
“Diabetic foot ulceration (DFU) is a serious and prevalent complication of diabetes, ultimately affecting some 25% of those living with the disease (1). DFUs have a consistently negative impact on quality of life and productivity […] Patients with DFUs also have morbidity and mortality rates equivalent to aggressive forms of cancer (2). These ulcers remain an important risk factor for lower-extremity amputation as up to 85% of amputations are preceded by foot ulcers (6). It should therefore come as no surprise that some 33% of the $116 billion in direct costs generated by the treatment of diabetes and its complications was linked to the treatment of foot ulcers (7). Another study has suggested that 25–50% of the costs related to inpatient diabetes care may be directly related to DFUs (2). […] The cost of care of people with diabetic foot ulcers is 5.4 times higher in the year after the first ulcer episode than the cost of care of people with diabetes without foot ulcers (10). […]
We identified 22,531 NIH-funded projects in diabetes between 2002–2011. Remarkably, of these, only 33 (0.15%) were specific to DFUs. Likewise, these 22,531 NIH-funded projects yielded $7,161,363,871 in overall diabetes funding, and of this, only $11,851,468 (0.17%) was specific to DFUs. Thus, a 604-fold difference exists between overall diabetes funding and that allocated to DFUs. […] As DFUs are prevalent and have a negative impact on the quality of life of patients with diabetes, it would stand to reason that U.S. federal funding specifically for DFUs would be proportionate with this burden. Unfortunately, this yawning gap in funding (and commensurate development of a culture of sub-specialty research) stands in stark contrast to the outsized influence of DFUs on resource utilization within diabetes care. This disparity does not appear to be isolated to [the US].”
I’ve read about diabetic foot care before, but I had no idea about this stuff. Of the roughly 175,000 peer-reviewed publications about diabetes published in the period 2000–2009, only 1,200 of them – 0.69% – were about the diabetic foot. You can quibble over the cost estimates and argue that perhaps they’re overstated because these guys want more money, but I think it’s highly unlikely that the uncertainties related to the cost estimates are so big as to somehow make the current (research) resource allocation scheme appear cost-efficient in a CBA with reasonable assumptions – there simply has to be some low-hanging fruit here.
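The fold-difference claim in the quote is easy to verify from the quoted figures themselves; a quick sanity check:

```python
# Figures as quoted from the Diabetes Care comment (NIH projects, 2002-2011).
total_projects, dfu_projects = 22_531, 33
total_funding, dfu_funding = 7_161_363_871, 11_851_468  # USD

print(f"DFU share of projects: {dfu_projects / total_projects:.2%}")  # 0.15%
print(f"DFU share of funding:  {dfu_funding / total_funding:.2%}")    # 0.17%
print(f"fold difference:       {total_funding / dfu_funding:.0f}")    # 604
```

The quoted percentages and the 604-fold figure are all internally consistent.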
A slightly related (if you stretch the definition of ‘related’ a little) article which I also found interesting here.
iv. “How quickly would the oceans drain if a circular portal 10 meters in radius leading into space was created at the bottom of Challenger Deep, the deepest spot in the ocean? How would the Earth change as the water is being drained?”
And, “Supposing you did Drain the Oceans, and dumped the water on top of the Curiosity rover, how would Mars change as the water accumulated?”
v. Take news of cancer ‘breakthrough’ with a big grain of salt. I’d have added the word ‘any’ and probably an ‘s’ to the word breakthrough as well if I’d authored the headline, in order to make a more general point – but be that as it may… The main thrust:
“scientific breakthroughs should not be announced at press conferences using the vocabulary of public relations professionals.
The language of science and medicine should be cautious and humble because diseases like cancer are relentless and humbling. […]
The reality is that biomedical research is a slow process that yields small incremental results. If there is a lesson to retain from the tale of CFI-400945, it’s that finding new treatments takes a lot of time and a lot of money. It is a venture worthy of support, but unworthy of exaggerated expectations and casual overstatement.
Hype only serves to create false hope.”
People who’re not familiar with how science actually works (and how related processes such as drug development work) often have weird ideas about how fast things tend to proceed and how (/un?)likely a ‘promising’ result in the lab might be to be translated into, say, a new treatment option available to the general patient population. And yeah, that set of ‘people who’re not familiar with how science works’ would include almost everybody.
It should be noted, as I’m sure Picard knows, that it’s a lot easier to get funding for your project if you’re exaggerating benefits and downplaying costs; if you’re too optimistic; if you’re saying nice things about the guy writing the checks even though you think he’s an asshole; etc. Some types of dishonesty are probably best thought of as nothing more than ‘good salesmanship’, whereas other types might have different interpretations; but either way it’d be silly to pretend that stuff like false hope does not sell a lot of tickets (and newspapers, and diluted soap water, and…). Given that, it’s hardly likely that things will change much anytime soon – the demand for information here is much higher than the demand for accurate information. But it’s nice to read an article like this one every now and then anyway.
i. Victorian era. This is a fascinating article, with lots of stuff:
“The Victorian era of British history was the period of Queen Victoria’s reign from 20 June 1837 until her death on 22 January 1901. It was a long period of peace, prosperity, refined sensibilities and national self-confidence for Britain. Some scholars date the beginning of the period in terms of sensibilities and political concerns to the passage of the Reform Act 1832.
The era was preceded by the Georgian period and followed by the Edwardian period. The latter half of the Victorian age roughly coincided with the first portion of the Belle Époque era of continental Europe and the Gilded Age of the United States.
Culturally there was a transition away from the rationalism of the Georgian period and toward romanticism and mysticism with regard to religion, social values, and the arts. In international relations the era was a long period of peace, known as the Pax Britannica, and economic, colonial, and industrial consolidation, temporarily disrupted by the Crimean War in 1854. The end of the period saw the Boer War. Domestically, the agenda was increasingly liberal with a number of shifts in the direction of gradual political reform, industrial reform and the widening of the voting franchise. […]
The population of England almost doubled from 16.8 million in 1851 to 30.5 million in 1901. Scotland’s population also rose rapidly, from 2.8 million in 1851 to 4.4 million in 1901. Ireland’s population decreased rapidly, from 8.2 million in 1841 to less than 4.5 million in 1901, mostly due to the Great Famine. At the same time, around 15 million emigrants left the United Kingdom in the Victorian era and settled mostly in the United States, Canada, and Australia. […]
The mortality rates in England changed greatly through the 19th century. There was no catastrophic epidemic or famine in England or Scotland in the 19th century – it was the first century in which a major epidemic did not occur throughout the whole country, with deaths per 1000 of population per year in England and Wales dropping from 21.9 from 1848–54 to 17 in 1901 (contrasting with, for instance, 5.4 in 1971). […]
The Victorian era became notorious for the employment of young children in factories and mines and as chimney sweeps. Child labour, often brought about by economic hardship, played an important role in the Industrial Revolution from its outset: Charles Dickens, for example, worked at the age of 12 in a blacking factory, with his family in a debtors’ prison. In 1840 only about 20 percent of the children in London had any schooling. By 1860 about half of the children between 5 and 15 were in school (including Sunday school).
The children of the poor were expected to help towards the family budget, often working long hours in dangerous jobs for low wages. Agile boys were employed by the chimney sweeps; small children were employed to scramble under machinery to retrieve cotton bobbins; and children were also employed to work in coal mines, crawling through tunnels too narrow and low for adults. Children also worked as errand boys, crossing sweepers, shoe blacks, or sold matches, flowers, and other cheap goods. Some children undertook work as apprentices to respectable trades, such as building, or as domestic servants (there were over 120,000 domestic servants in London in the mid-19th century). Working hours were long: builders might work 64 hours a week in summer and 52 in winter, while domestic servants worked 80 hour weeks. Many young people worked as prostitutes (the majority of prostitutes in London were between 15 and 22 years of age). […]
Children as young as four were put to work. In coal mines children began work at the age of 5 and generally died before the age of 25. Many children (and adults) worked 16 hour days. As early as 1802 and 1819, Factory Acts were passed to limit the working hours of workhouse children in factories and cotton mills to 12 hours per day. These acts were largely ineffective […]
Beginning in the late 1840s, major news organisations, clergymen, and single women became increasingly concerned about prostitution, which came to be known as “The Great Social Evil”. Estimates of the number of prostitutes in London in the 1850s vary widely (in his landmark study, Prostitution, William Acton reported that the police estimated there were 8,600 in London alone in 1857). When the United Kingdom Census 1851 publicly revealed a 4% demographic imbalance in favour of women (i.e., 4% more women than men), the problem of prostitution began to shift from a moral/religious cause to a socio-economic one. The 1851 census showed that the population of Great Britain was roughly 18 million; this meant that roughly 750,000 women would remain unmarried simply because there were not enough men. These women came to be referred to as “superfluous women” or “redundant women”, and many essays were published discussing what, precisely, ought to be done with them. […] Divorce legislation introduced in 1857 allowed for a man to divorce his wife for adultery, but a woman could only divorce if adultery were accompanied by cruelty. The anonymity of the city led to a large increase in prostitution and unsanctioned sexual relationships.”
An image from the article, displaying “working class life in Victorian Wetherby, West Yorkshire”:
ii. Landlocked country.
“A landlocked country is a country entirely enclosed by land, or whose only coastlines lie on closed seas. There are 48 landlocked countries in the world, including partially recognized states. No landlocked countries are found on the North American, Australian, and inhospitable Antarctic continents. The general economic and other disadvantages experienced by landlocked countries make the majority of these countries Landlocked Developing Countries (LLDCs). Nine of the twelve countries with the lowest HDI scores are landlocked. […] Historically, being landlocked was regarded as a disadvantageous position. It cuts the country off from sea resources such as fishing, but more importantly cuts off direct access to seaborne trade which makes up a large percentage of international trade. Coastal regions tended to be wealthier and more heavily populated than inland ones. […] Landlocked developing countries have significantly higher costs of international cargo transportation compared to coastal developing countries (in Asia the ratio is 3:1).”
Landlocked countries make up 11.4% of the total land area of Earth, and they contain an estimated 6.9% of the world’s population.
“A landlocked country surrounded only by other landlocked countries may be called a “doubly landlocked” country. A person in such a country has to cross at least two borders to reach a coastline.
There are currently two such countries in the world:
- Liechtenstein in Central Europe surrounded by Switzerland and Austria.
- Uzbekistan in Central Asia surrounded by Afghanistan, Kazakhstan, Kyrgyzstan, Tajikistan, and Turkmenistan.
iii. 1842 Kabul Retreat.
“The 1842 Kabul Retreat (or Massacre of Elphinstone’s Army) was the entire loss of a combined force of British and Indian troops from the British East India Company and the deaths of thousands of civilians in Afghanistan between 6-13 January 1842. The massacre, which happened during the First Anglo-Afghan War, occurred when Major General Sir William Elphinstone attempted to lead a military and civilian column of Europeans and Indians from Kabul back to the British garrison at Jalalabad more than 90 miles (140 km) away. They were forced to leave because of an uprising led by Akbar Khan, the son of the deposed Afghan leader, Dost Mohammad Khan.
Afghan tribes launched numerous attacks against the column as it made slow progress through the winter snows of the Hindu Kush. In total the East India Company army lost 4,500 troops, along with 12,000 civilian workers, family members and other camp-followers. The final stand was made just outside a village called Gandamak on 13 January.
Out of more than 16,000 people from the column commanded by Elphinstone, only one European, an Assistant Surgeon named William Brydon, and a few sepoys would eventually reach Jalalabad. The Afghanis subsequently released a number of British prisoners and civilian hostages. However many Indians were not handed back and were instead sold into slavery or killed.
Sir Willoughby Cotton was replaced as commander of the remaining British troops by the ageing and infirm Sir William Elphinstone. The 59-year-old Major General, who was initially unwilling to accept the appointment, had entered the British army in 1804. He was made a Companion of the Bath for leading the 33rd Regiment of Foot at the Battle of Waterloo. By 1825 he had been promoted to colonel and made a major-general in 1837. Although Elphinstone was a man of high birth and perfect manners, his colleague and contemporary General William Nott regarded him as “the most incompetent soldier who ever became general”. […]
Throughout the third day, the column laboured through the pass. Once the main body had moved through, the Afghans left their positions to massacre the stragglers and the wounded. By the evening of 9 January, the column had only moved 25 miles (40 km) but already 3,000 people had died. Most had been killed in the fighting, but some had frozen to death or even taken their own lives.
By the fourth day, a few hundred soldiers deserted and tried to return to Kabul but they were all killed. By now Elphinstone, who had ceased giving orders, sat silently on his horse. On the evening of 11 January, Lady Sale, along with the wives and children of both British and Indian officers, and their retinues, accepted Akbar Khan’s assurances of protection. Despite deep mistrust, the group was taken into the custody of Akbar’s men. However once they were hostages, all the Indian servants and sepoy wives were murdered. Akbar Khan’s envoys then returned and persuaded Elphinstone and his second in command, Brigadier Shelton, to become hostages, too. Both senior officers agreed to surrender, abandoning their men to their fate. Elphinstone died on 23 April as a captive. […] On 13 January, a British officer from the 16,000 strong column rode into Jalalabad on a wounded horse (a few sepoys, who had hidden in the mountains, followed in the coming weeks). The sole survivor of the 12-man cavalry group, assistant Surgeon William Brydon, was asked upon arrival what happened to the army, to which he answered “I am the army”. Although part of his skull had been sheared off by a sword, he ultimately survived because he had insulated his hat with a magazine which deflected the blow. […]
The annihilation of about 16,500 people left Britain and India in shock and the Governor General, Lord Auckland, suffered a stroke upon hearing the news. In the Autumn of 1842 an “Army of Retribution” led by Sir George Pollock, with William Nott and Robert Sale commanding divisions, levelled Kabul. Sale personally rescued his wife Lady Sale and some other hostages from the hands of Akbar Khan. However, the slaughter of an army by Afghan tribesmen was humiliating for the British authorities in India.
Of the British prisoners, 32 officers, over 50 soldiers, 21 children and 12 women survived to be released in September 1842. An unknown number of sepoys and other Indian prisoners were sold into slavery in Kabul or kept as captives in mountain villages. […]
The leadership of Elphinstone is seen as a notorious example of how the ineptitude and indecisiveness of a senior officer could compromise the morale and effectiveness of a whole army (though already much depleted). Elphinstone completely failed to lead his soldiers, but fatally exerted enough authority to prevent any of his officers from exercising proper command in his place.”
iv. Alcatraz Federal Penitentiary. (‘good article’)
“The Alcatraz Federal Penitentiary or United States Penitentiary, Alcatraz Island (often just referred to as Alcatraz) was a maximum high-security Federal prison on Alcatraz Island, 1.25 miles (2.01 km) off the coast of San Francisco, California, USA, which operated from 1934 to 1963. […]
Alcatraz was designed to hold prisoners who continuously caused trouble at other federal prisons. One of the world’s most notorious, and best known prisons over the years, Alcatraz housed some 1576 of America’s most ruthless criminals […] Faced with high running maintenance costs and a poor reputation, Alcatraz closed on March 21, 1963. […]
The prison cells typically measured 9 feet (2.7 m) by 5 feet (1.5 m) and 7 feet (2.1 m) high. The cells were primitive and lacked privacy, with a bed, a desk and a washbasin and toilet on the back wall and few furnishings except a blanket. Black people were segregated from the rest in cell designation due to racial abuse being prevalent. […]
By the 1950s, the prison conditions had improved and prisoners were gradually permitted more privileges such as the playing of musical instruments, watching movies at weekends, painting, and radio use; the strict code of silence became more relaxed and prisoners were permitted to talk quietly. However, the prison continued to be unpopular on the mainland into the 1950s; it was by far the most expensive prison institution in the United States and continued to be perceived by many as America’s most extreme jail. […] A 1959 report indicated that Alcatraz was more than three times more expensive to run than the average US prison; $10 per prisoner per day compared to $3 in most other prisons. The problem of Alcatraz was exacerbated by the fact that the prison had seriously deteriorated structurally from exposure to the salt air and wind and would need $5 million to deal with it. Major repairs began in 1958, but by 1961 the prison was evaluated by engineers to be a lost cause and Robert F. Kennedy submitted plans for a new maximum-security institution at Marion, Illinois. After the escape from Alcatraz in June 1962, the prison was the subject of heated investigations, and with the major structural problems and ongoing expense, the prison finally closed on 21 March 1963. […] Today the penitentiary is a museum and one of San Francisco’s major tourist attractions, attracting some 1.5 million visitors annually. […]
Security in the prison was very tight, with the constant checking of bars, doors, locks, electrical fixtures etc., to ensure that security hadn’t been broken. During a standard day the prisoners would be counted 13 times, and the ratio of prisoners to guards was the lowest of any American prison of the time. […]
The library, which utilized a closed-stack paging system, had a collection of 10,000 to 15,000 books […] The average prisoner read 75 to 100 books a year.”
Pathological science is the process by which “people are tricked into false results … by subjective effects, wishful thinking or threshold interactions”. The term was first used by Irving Langmuir, Nobel Prize-winning chemist, during a 1953 colloquium at the Knolls Research Laboratory. Langmuir said a pathological science is an area of research that simply will not “go away”—long after it was given up on as ‘false’ by the majority of scientists in the field. He called pathological science “the science of things that aren’t so”.
Bart Simon lists it among practices pretending to be science: “categories [.. such as ..] pseudoscience, amateur science, deviant or fraudulent science, bad science, junk science, and popular science [..] pathological science, cargo-cult science, and voodoo science ..”. Examples of pathological science may include homeopathy, Martian canals, N-rays, polywater, water memory, perpetual motion, and cold fusion. The theories and conclusions behind all of these examples are currently rejected or disregarded by the majority of scientists. […]
Pathological science, as defined by Langmuir, is a psychological process in which a scientist, originally conforming to the scientific method, unconsciously veers from that method, and begins a pathological process of wishful data interpretation (see the Observer-expectancy effect, and cognitive bias). Some characteristics of pathological science are:
- The maximum effect that is observed is produced by a causative agent of barely detectable intensity, and the magnitude of the effect is substantially independent of the intensity of the cause.
- The effect is of a magnitude that remains close to the limit of detectability, or many measurements are necessary because of the very low statistical significance of the results.
- There are claims of great accuracy.
- Fantastic theories contrary to experience are suggested.
- Criticisms are met by ad hoc excuses.
- The ratio of supporters to critics rises and then falls gradually to oblivion.
Langmuir never intended the term to be rigorously defined; it was simply the title of his talk on some examples of “weird science”.”
“Upwelling is an oceanographic phenomenon that involves wind-driven motion of dense, cooler, and usually nutrient-rich water towards the ocean surface, replacing the warmer, usually nutrient-depleted surface water. The nutrient-rich upwelled water stimulates the growth and reproduction of primary producers such as phytoplankton. Due to the biomass of phytoplankton and presence of cool water in these regions, upwelling zones can be identified by cool sea surface temperatures (SST) and high concentrations of chlorophyll-a.
The increased availability of nutrients in upwelling regions results in high levels of primary productivity and thus fishery production. Approximately 25% of the total global marine fish catches come from five upwellings that occupy only 5% of the total ocean area.”
This book is crap, stay away from it. It’s very short, which was the only reason why I actually read it cover to cover. Kosso neglects some very important points you’d want to see in a publication like this: on the list of recommended reading he includes Kuhn but not Popper, and Popper’s name isn’t even mentioned, presumably because he disagrees with Popper about the importance of falsification. Conceptually he doesn’t talk about, and doesn’t seem to understand, how crucial the requirement is in science that you restrict the space of potential outcomes when forming hypotheses. He picks out history and archaeology as examples of ‘social sciences’; maybe because that’s the closest he’s ever been to the social sciences? He talks about how experimental designs can play a role here, but doesn’t include a single word about the role of statistics in scientific disciplines.
I’d probably give it 1 out of 5 on amazon. He reads as if he doesn’t have a clue. The only good thing about the book is that it is quite short.
This is great stuff:
I’m sure I’ve seen some of this stuff before (Razib Khan may have covered it), but I’m pretty sure I have not blogged it. Link to the source here; click to view the figures/tables in full size:
Of course formal education matters, a lot:
Interestingly, the link also has data related to a recent post:
It seems that public opinion doesn’t change very much over time. I thought this last one was interesting (if anyone knows of any related Danish data, let me know in the comment section):
Note that this is only the “very great prestige”-proportion, so there may be stuff going on we don’t know about. Note how much both ‘teacher’ and ‘military officer’ have changed over time. Something funny may be going on here; ‘farmer’ is more prestigious than ‘Member of Congress’ and ‘Lawyer’ (‘well of course it is,’ you might say, but…).
I just had to share this (US data, from the GSS):
Razib doesn’t comment, he just gives you the data. I won’t comment much on the above graph now either, but that’s only because I’m dumbstruck right now – I just have no idea what to say to all those people who do not strongly disagree.
Do Pressures to Publish Increase Scientists’ Bias? An Empirical Support from US States Data, by Daniele Fanelli (link). Abstract:
“The growing competition and “publish or perish” culture in academia might conflict with the objectivity and integrity of research, because it forces scientists to produce “publishable” results at all costs. Papers are less likely to be published and to be cited if they report “negative” results (results that fail to support the tested hypothesis). Therefore, if publication pressures increase scientific bias, the frequency of “positive” results in the literature should be higher in the more competitive and “productive” academic environments. This study verified this hypothesis by measuring the frequency of positive results in a large random sample of papers with a corresponding author based in the US. Across all disciplines, papers were more likely to support a tested hypothesis if their corresponding authors were working in states that, according to NSF data, produced more academic papers per capita. The size of this effect increased when controlling for state’s per capita R&D expenditure and for study characteristics that previous research showed to correlate with the frequency of positive results, including discipline and methodology. Although the confounding effect of institutions’ prestige could not be excluded (researchers in the more productive universities could be the most clever and successful in their experiments), these results support the hypothesis that competitive academic environments increase not only scientists’ productivity but also their bias. The same phenomenon might be observed in other countries where academic competition and pressures to publish are high.”
An important bit on ‘”negative” results’ from the paper:
“Words like “positive”, “significant”, “negative” or “null” are common scientific jargon, but are obviously misleading, because all results are equally relevant to science, as long as they have been produced by sound logic and methods [11,12]. Yet, literature surveys and meta-analyses have extensively documented an excess of positive and/or statistically significant results in fields and subfields of, for example, biomedicine, biology, ecology and evolution, psychology, economics, and sociology.
Many factors contribute to this publication bias against negative results, which is rooted in the psychology and sociology of science. Like all human beings, scientists are confirmation-biased (i.e. tend to select information that supports their hypotheses about the world) [19,20,21], and they are far from indifferent to the outcome of their own research: positive results make them happy and negative ones disappointed. This bias is likely to be reinforced by a positive feedback from the scientific community. Since papers reporting positive results attract more interest and are cited more often, journal editors and peer reviewers might tend to favour them, which will further increase the desirability of a positive outcome to researchers, particularly if their careers are evaluated by counting the number of papers listed in their CVs and the impact factor of the journals they are published in.
Confronted with a “negative” result, therefore, a scientist might be tempted to either not spend time publishing it (what is often called the “file-drawer effect”, because negative papers are imagined to lie in scientists’ drawers) or to turn it somehow into a positive result. This can be done by re-formulating the hypothesis (sometimes referred to as HARKing: Hypothesizing After the Results are Known), by selecting the results to be published, by tweaking data or analyses to “improve” the outcome, or by willingly and consciously falsifying them. Data fabrication and falsification are probably rare, but other questionable research practices might be relatively common.”
I had an interesting discussion yesterday which touched briefly upon a few of these subjects, so I decided to take a closer look at the data just to make sure I wasn’t completely wrong about the stuff I thought I knew – and now I’m glad I did as I seem to have somehow picked up a mistaken idea about the land area of the Southern Hemisphere (I thought it was even smaller than it is). Now, if you asked a random guy he wouldn’t know most of these numbers or even the relevant neighbourhood. Somehow I feel like people should. So here we go, most of these numbers are pulled from wikipedia:
1. Asia covers 8.7 % of the Earth’s total surface area and hosts ~60 % of the world’s current human population. It covers 29.5 % of the land area of Earth.
1a. Africa covers 6 % of the Earth’s total surface area and hosts ~14-15 % of the world’s population. It covers 20.4% of the total land area.
1b. North America: 4.8 % of surface area, 8 % of population. 16.5 % of total land area.
1c. South America: 3.5 % of surface area, 6 % of population. 12.0 % of total land area.
1d. Antarctica: 2.7 % of surface area, 0 % of population. 9.2 % of total land area.
1e. Europe: 2 % of surface area, 11.5 % of population. 6.8 % of total land area.
1f. Australia: 1.5 % of surface area, 0.5 % of population. 5.1 % of total land area.
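The shares listed above can be sanity-checked with a few lines of code; the figures below are simply the approximate percentages quoted in the list, so small rounding discrepancies are expected:

```python
# Approximate continent shares quoted in the list above (percent).
surface_share = {  # share of Earth's total surface area
    "Asia": 8.7, "Africa": 6.0, "North America": 4.8,
    "South America": 3.5, "Antarctica": 2.7,
    "Europe": 2.0, "Australia": 1.5,
}
land_share = {  # share of Earth's total land area
    "Asia": 29.5, "Africa": 20.4, "North America": 16.5,
    "South America": 12.0, "Antarctica": 9.2,
    "Europe": 6.8, "Australia": 5.1,
}

total_surface = sum(surface_share.values())
total_land = sum(land_share.values())
print(f"Continents cover {total_surface:.1f} % of Earth's surface")
print(f"Land shares sum to {total_land:.1f} %")
```

The surface-area shares sum to roughly 29.2 %, i.e. the land fraction of the planet, and the land-area shares sum to roughly 99.5 %, which is 100 % up to rounding.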
2. Russia covers 17,075,400 square kilometres. Europe and Australia combined make up ~17.8 million square kilometres, a number which incidentally is about the same as the area of South America. So if we for a moment disregard the fact that Russia already makes up 40 % of the total area of Europe, it’s large enough to almost cover the two smallest continents combined.
3. According to a 2010 census, the population of China was/is 1,339,724,852 – which is more than 19 % of the population of Earth. This is a higher population than that of any single continent other than Asia. The population of China is significantly larger than the combined populations of South America (385.7 million), North America (529 million) and Australia (31.26 million). It’s larger than the combined populations of Europe and North America. Here’s a neat image comparing sizes and populations of the continents.
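Using only the population figures quoted above (in millions), the claim that China alone outnumbers those three continents combined checks out:

```python
# Population figures quoted above, in millions (circa 2010).
china = 1339.7
south_america = 385.7
north_america = 529.0
australia = 31.26

combined = south_america + north_america + australia
print(f"China: {china} million; SA + NA + Australia: {combined:.2f} million")
assert china > combined  # China exceeds the three continents combined
```

The three continents together come to roughly 946 million people, comfortably below China's ~1,340 million.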
4. This source notes that: “In the Northern Hemisphere, the ratio of land to ocean is about 1 to 1.5. The ratio of land to ocean in the Southern Hemisphere is 1 to 4.” Translating those ratios into percentages of the hemispheres, it turns out that in the Northern Hemisphere 60 % of the area is made up of ocean and 40 % is covered by land, whereas only 20 % of the Southern Hemisphere is covered by land and 80 % is covered by ocean. Oceans cover roughly 70.8 % of the total area of Earth and land masses cover 29.2 %, so these numbers are probably ok. Here’s an image from Wikipedia:
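The ratio-to-percentage conversion in point 4 is simple arithmetic; here is a minimal sketch, which also uses the fact that the two hemispheres have equal area to recover an estimate of the global land fraction:

```python
def land_fraction(land: float, ocean: float) -> float:
    """Convert a land:ocean ratio into the land share of a hemisphere."""
    return land / (land + ocean)

north = land_fraction(1, 1.5)  # Northern Hemisphere: 1 to 1.5
south = land_fraction(1, 4)    # Southern Hemisphere: 1 to 4
print(f"North: {north:.0%} land, South: {south:.0%} land")

# The hemispheres are equal in area, so the global land fraction
# is the average of the two hemispheric fractions.
global_land = (north + south) / 2
print(f"Global land fraction: {global_land:.1%}")
```

The estimate of 30 % obtained from these rounded ratios lands close to the 29.2 % land-cover figure quoted above.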
About 90 percent of the human population lives in the Northern Hemisphere – the combined human population of the entire Southern Hemisphere is smaller than the population of Europe.
4a. The Pacific Ocean covers a larger area than all land masses of Earth combined.
4b. The Atlantic Ocean covers, as a very rough approximation, the same area (106 million square kilometres) as the total land area of the Northern Hemisphere. It covers an area corresponding to more than 70 percent of the total land area of Earth.
4c. The Indian Ocean covers 68,556,000 square kilometres, approximately the same area as Asia and North America combined.
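The comparison in 4c can be reproduced from the surface-area percentages in point 1, combined with the standard figure of about 510 million km² for Earth's total surface area (the latter is not quoted in the post, so treat it as an assumption):

```python
EARTH_SURFACE = 510.1e6  # km^2; standard figure, not taken from the post
asia = 0.087 * EARTH_SURFACE           # 8.7 % of surface area (point 1)
north_america = 0.048 * EARTH_SURFACE  # 4.8 % of surface area (point 1b)
indian_ocean = 68_556_000              # km^2, quoted in 4c

print(f"Asia + North America: {asia + north_america:,.0f} km^2")
print(f"Indian Ocean:         {indian_ocean:,} km^2")
```

The two areas agree to within about half a percent, which is well inside the rounding of the quoted percentages.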
4d. The average depth of the world oceans is about 3.8 kilometers (link).
5. I can’t copy the image, but go here for a really neat illustration of the surface elevation of the areas of Earth – I’m really annoyed I can’t copy this and put it in the post. Antarctica has by far the highest mean elevation of all continents. According to this source, the mean elevation of the continent is 2,286 m. Disregarding Antarctica (which can be considered somewhat of an outlier because of the ice-thing), it seems that there’s a connection between the area of a continent and its mean elevation – i.e. the larger the area of the continent, the higher the elevation. Here’s a relevant paper.