Econstudentlog

Imitation Games – Avi Wigderson

If you wish to skip the introduction, the talk starts at 5:20. The talk itself lasts roughly an hour, with the last ca. 20 minutes devoted to Q&A – that part is worth watching as well.

Some links related to the talk below:

Theory of computation.
Turing test.
COMPUTING MACHINERY AND INTELLIGENCE.
Probabilistic encryption & how to play mental poker keeping secret all partial information (Goldwasser & Micali, 1982).
Probabilistic algorithm
How To Generate Cryptographically Strong Sequences Of Pseudo-Random Bits (Blum & Micali, 1984)
Randomness extractor
Dense graph
Periodic sequence
Extremal graph theory
Szemerédi’s theorem
Green–Tao theorem
Szemerédi regularity lemma
New Proofs of the Green-Tao-Ziegler Dense Model Theorem: An Exposition
Calibrating Noise to Sensitivity in Private Data Analysis
Generalization in Adaptive Data Analysis and Holdout Reuse
Book: Math and Computation | Avi Wigderson
One-way function
Lattice-based cryptography

August 23, 2021 Posted by | Computer science, Cryptography, Data, Lectures, Mathematics, Science, Statistics | Leave a comment

James Simons interview


James Simons. Differential geometry. Minimal Varieties in Riemannian Manifolds. Shiing-Shen Chern. Characteristic Forms and Geometric Invariants. Renaissance Technologies.

“That’s really what’s great about basic science and in this case mathematics, I mean, I didn’t know any physics. It didn’t occur to me that this material, that Chern and I had developed would find use somewhere else altogether. This happens in basic science all the time that one guy’s discovery leads to someone else’s invention and leads to another guy’s machine or whatever it is. Basic science is the seed corn of our knowledge of the world. …I loved the subject, but I liked it for itself, I wasn’t thinking of applications. […] the government’s not doing such a good job at supporting basic science and so there’s a role for philanthropy, an increasingly important role for philanthropy.”

“My algorithm has always been: You put smart people together, you give them a lot of freedom, create an atmosphere where everyone talks to everyone else. They’re not hiding in the corner with their own little thing. They talk to everybody else. And you provide the best infrastructure. The best computers and so on that people can work with and make everyone partners.”

“We don’t have enough teachers of mathematics who know it, who know the subject … and that’s for a simple reason: 30-40 years ago, if you knew some mathematics, enough to teach in let’s say high school, there weren’t a million other things you could do with that knowledge. Oh yeah, maybe you could become a professor, but let’s suppose you’re not quite at that level but you’re good at math and so on.. Being a math teacher was a nice job. But today if you know that much mathematics, you can get a job at Google, you can get a job at IBM, you can get a job in Goldman Sachs, I mean there’s plenty of opportunities that are going to pay more than being a high school teacher. There weren’t so many when I was going to high school … so the quality of high school teachers in math has declined, simply because if you know enough to teach in high school you know enough to work for Google…”

January 12, 2021 Posted by | Mathematics, Papers, Science | Leave a comment

The pleasure of finding things out (II)

Here’s my first post about the book. In this post I have included a few more quotes from the last half of the book.

“Are physical theories going to keep getting more abstract and mathematical? Could there be today a theorist like Faraday in the early nineteenth century, not mathematically sophisticated but with a very powerful intuition about physics?
Feynman: I’d say the odds are strongly against it. For one thing, you need the math just to understand what’s been done so far. Beyond that, the behavior of subnuclear systems is so strange compared to the ones the brain evolved to deal with that the analysis has to be very abstract: To understand ice, you have to understand things that are themselves very unlike ice. Faraday’s models were mechanical – springs and wires and tense bands in space – and his images were from basic geometry. I think we’ve understood all we can from that point of view; what we’ve found in this century is different enough, obscure enough, that further progress will require a lot of math.”

“There’s a tendency to pomposity in all this, to make it all deep and profound. My son is taking a course in philosophy, and last night we were looking at something by Spinoza – and there was the most childish reasoning! There were all these Attributes, and Substances, all this meaningless chewing around, and we started to laugh. Now, how could we do that? Here’s this great Dutch philosopher, and we’re laughing at him. It’s because there was no excuse for it! In that same period there was Newton, there was Harvey studying the circulation of the blood, there were people with methods of analysis by which progress was being made! You can take every one of Spinoza’s propositions, and take the contrary propositions, and look at the world – and you can’t tell which is right. Sure, people were awed because he had the courage to take on these great questions, but it doesn’t do any good to have the courage if you can’t get anywhere with the question. […] It isn’t the philosophy that gets me, it’s the pomposity. If they’d just laugh at themselves! If they’d just say, “I think it’s like this, but von Leipzig thought it was like that, and he had a good shot at it, too.” If they’d explain that this is their best guess … But so few of them do”.

“The lesson you learn as you grow older in physics is that what we can do is a very small fraction of what there is. Our theories are really very limited.”

“The first principle is that you must not fool yourself – and you are the easiest person to fool. So you have to be very careful about that. After you’ve not fooled yourself, it’s easy not to fool other scientists. You just have to be honest in a conventional way after that.”

“When I was an undergraduate I worked with Professor Wheeler* as a research assistant, and we had worked out together a new theory about how light worked, how the interaction between atoms in different places worked; and it was at that time an apparently interesting theory. So Professor Wigner†, who was in charge of the seminars there [at Princeton], suggested that we give a seminar on it, and Professor Wheeler said that since I was a young man and hadn’t given seminars before, it would be a good opportunity to learn how to do it. So this was the first technical talk that I ever gave. I started to prepare the thing. Then Wigner came to me and said that he thought the work was important enough that he’d made special invitations to the seminar to Professor Pauli, who was a great professor of physics visiting from Zurich; to Professor von Neumann, the world’s greatest mathematician; to Henry Norris Russell, the famous astronomer; and to Albert Einstein, who was living near there. I must have turned absolutely white or something because he said to me, “Now don’t get nervous about it, don’t be worried about it. First of all, if Professor Russell falls asleep, don’t feel bad, because he always falls asleep at lectures. When Professor Pauli nods as you go along, don’t feel good, because he always nods, he has palsy,” and so on. That kind of calmed me down a bit”.

“Well, for the problem of understanding the hadrons and the muons and so on, I can see at the present time no practical applications at all, or virtually none. In the past many people have said that they could see no applications and then later they found applications. Many people would promise under those circumstances that something’s bound to be useful. However, to be honest – I mean he looks foolish; saying there will never be anything useful is obviously a foolish thing to do. So I’m going to be foolish and say these damn things will never have any application, as far as I can tell. I’m too dumb to see it. All right? So why do you do it? Applications aren’t the only thing in the world. It’s interesting in understanding what the world is made of. It’s the same interest, the curiosity of man that makes him build telescopes. What is the use of discovering the age of the universe? Or what are these quasars that are exploding at long distances? I mean what’s the use of all that astronomy? There isn’t any. Nonetheless, it’s interesting. So it’s the same kind of exploration of our world that I’m following and it’s curiosity that I’m satisfying. If human curiosity represents a need, the attempt to satisfy curiosity, then this is practical in the sense that it is that. That’s the way I would look at it at the present time. I would not put out any promise that it would be practical in some economic sense.”

“To science we also bring, besides the experiment, a tremendous amount of human intellectual attempt at generalization. So it’s not merely a collection of all those things which just happen to be true in experiments. It’s not just a collection of facts […] all the principles must be as wide as possible, must be as general as possible, and still be in complete accord with experiment, that’s the challenge. […] Every one of the concepts of science is on a scale graduated somewhere between, but at neither end of, absolute falsity or absolute truth. It is necessary, I believe, to accept this idea, not only for science, but also for other things; it is of great value to acknowledge ignorance. It is a fact that when we make decisions in our life, we don’t necessarily know that we are making them correctly; we only think that we are doing the best we can – and that is what we should do.”

“In this age of specialization, men who thoroughly know one field are often incompetent to discuss another.”

“I believe that moral questions are outside of the scientific realm. […] The typical human problem, and one whose answer religion aims to supply, is always of the following form: Should I do this? Should we do this? […] To answer this question we can resolve it into two parts: First – If I do this, what will happen? – and second – Do I want that to happen? What would come of it of value – of good? Now a question of the form: If I do this, what will happen? is strictly scientific. […] The technique of it, fundamentally, is: Try it and see. Then you put together a large amount of information from such experiences. All scientists will agree that a question – any question, philosophical or other – which cannot be put into the form that can be tested by experiment (or, in simple terms, that cannot be put into the form: If I do this, what will happen?) is not a scientific question; it is outside the realm of science.”

June 26, 2019 Posted by | Astronomy, Books, Mathematics, Philosophy, Physics, Quotes/aphorisms, Science | Leave a comment

The pleasure of finding things out (I?)

As I put it in my goodreads review of the book, “I felt in good company while reading this book”. Some of the ideas in the book are by now well known; for example, some of the interview snippets included in the book have also been uploaded to youtube and have been viewed by hundreds of thousands of people (I added a couple of them to my ‘about’ page some years ago, and they’re still there – these are enjoyable videos to watch and they have aged well!). The overlap between the book’s text and the available sound recordings is not 100 % for this material, but it’s close enough that I assume these were the same interviews. Other ideas and pieces I would assume to be less well known, for example Feynman’s encounter with Uri Geller in Geller’s hotel room, where he was investigating Geller’s supposed abilities related to mind reading and key bending.

I have added some sample quotes from the book below. It’s a good book, recommended.

“My interest in science is to simply find out about the world, and the more I find out the better it is, like, to find out. […] You see, one thing is, I can live with doubt and uncertainty and not knowing. I think it’s much more interesting to live not knowing than to have answers which might be wrong. I have approximate answers and possible beliefs and different degrees of certainty about different things, but I’m not absolutely sure of anything and there are many things I don’t know anything about […] I don’t have to know an answer, I don’t feel frightened by not knowing things, by being lost in a mysterious universe without having any purpose, which is the way it really is so far as I can tell. It doesn’t frighten me.”

“Some people look at the activity of the brain in action and see that in many respects it surpasses the computer of today, and in many other respects the computer surpasses ourselves. This inspires people to design machines that can do more. What often happens is that an engineer has an idea of how the brain works (in his opinion) and then designs a machine that behaves that way. This new machine may in fact work very well. But, I must warn you that that does not tell us anything about how the brain actually works, nor is it necessary to ever really know that, in order to make a computer very capable. It is not necessary to understand the way birds flap their wings and how the feathers are designed in order to make a flying machine. It is not necessary to understand the lever system in the legs of a cheetah – an animal that runs fast – in order to make an automobile with wheels that goes very fast. It is therefore not necessary to imitate the behavior of Nature in detail in order to engineer a device which can in many respects surpass Nature’s abilities.”

“These ideas and techniques [of scientific investigation] , of course, you all know. I’ll just review them […] The first is the matter of judging evidence – well, the first thing really is, before you begin you must not know the answer. So you begin by being uncertain as to what the answer is. This is very, very important […] The question of doubt and uncertainty is what is necessary to begin; for if you already know the answer there is no need to gather any evidence about it. […] We absolutely must leave room for doubt or there is no progress and there is no learning. There is no learning without having to pose a question. And a question requires doubt. […] Authority may be a hint as to what the truth is, but it is not the source of information. As long as it’s possible, we should disregard authority whenever the observations disagree with it. […] Science is the belief in the ignorance of experts.”

“If we look away from the science and look at the world around us, we find out something rather pitiful: that the environment that we live in is so actively, intensely unscientific. Galileo could say: “I noticed that Jupiter was a ball with moons and not a god in the sky. Tell me, what happened to the astrologers?” Well, they print their results in the newspapers, in the United States at least, in every daily paper every day. Why do we still have astrologers? […] There is always some crazy stuff. There is an infinite amount of crazy stuff, […] the environment is actively, intensely unscientific. There is talk about telepathy still, although it’s dying out. There is faith-healing galore, all over. There is a whole religion of faith-healing. There’s a miracle at Lourdes where healing goes on. Now, it might be true that astrology is right. It might be true that if you go to the dentist on the day that Mars is at right angles to Venus, that it is better than if you go on a different day. It might be true that you can be cured by the miracle of Lourdes. But if it is true it ought to be investigated. Why? To improve it. If it is true then maybe we can find out if the stars do influence life; that we could make the system more powerful by investigating statistically, scientifically judging the evidence objectively, more carefully. If the healing process works at Lourdes, the question is how far from the site of the miracle can the person, who is ill, stand? Have they in fact made a mistake and the back row is really not working? Or is it working so well that there is plenty of room for more people to be arranged near the place of the miracle? Or is it possible, as it is with the saints which have recently been created in the United States–there is a saint who cured leukemia apparently indirectly – that ribbons that are touched to the sheet of the sick person (the ribbon having previously touched some relic of the saint) increase the cure of leukemia–the question is, is it gradually being diluted? You may laugh, but if you believe in the truth of the healing, then you are responsible to investigate it, to improve its efficiency and to make it satisfactory instead of cheating. For example, it may turn out that after a hundred touches it doesn’t work anymore. Now it’s also possible that the results of this investigation have other consequences, namely, that nothing is there.”

“I believe that a scientist looking at nonscientific problems is just as dumb as the next guy – and when he talks about a nonscientific matter, he will sound as naive as anyone untrained in the matter.”

“If we want to solve a problem that we have never solved before, we must leave the door to the unknown ajar.”

“For a successful technology, reality must take precedence over public relations, for nature cannot be fooled.”

“I would like to say a word or two […] about words and definitions, because it is necessary to learn the words. It is not science. That doesn’t mean just because it is not science that we don’t have to teach the words. We are not talking about what to teach; we are talking about what science is. It is not science to know how to change centigrade to Fahrenheit. It’s necessary, but it is not exactly science. […] I finally figured out a way to test whether you have taught an idea or you have only taught a definition. Test it this way: You say, “Without using the new word which you have just learned, try to rephrase what you have just learned in your own language.”

“My father dealt a little bit with energy and used the term after I got a little bit of the idea about it. […] He would say, “It [a toy dog] moves because the sun is shining,” […]. I would say “No. What has that to do with the sun shining? It moved because I wound up the springs.” “And why, my friend, are you able to move to wind up this spring?” “I eat.” “What, my friend, do you eat?” “I eat plants.” “And how do they grow?” “They grow because the sun is shining.” […] The only objection in this particular case was that this was the first lesson. It must certainly come later, telling you what energy is, but not to such a simple question as “What makes a [toy] dog move?” A child should be given a child’s answer. “Open it up; let’s look at it.””

“Now the point of this is that the result of observation, even if I were unable to come to the ultimate conclusion, was a wonderful piece of gold, with a marvelous result. It was something marvelous. Suppose I were told to observe, to make a list, to write down, to do this, to look, and when I wrote my list down, it was filed with 130 other lists in the back of a notebook. I would learn that the result of observation is relatively dull, that nothing much comes of it. I think it is very important – at least it was to me – that if you are going to teach people to make observations, you should show that something wonderful can come from them. […] [During my life] every once in a while there was the gold of a new understanding that I had learned to expect when I was a kid, the result of observation. For I did not learn that observation was not worthwhile. […] The world looks so different after learning science. For example, the trees are made of air, primarily. When they are burned, they go back to air, and in the flaming heat is released the flaming heat of the sun which was bound in to convert the air into trees, and in the ash is the small remnant of the part which did not come from air, that came from the solid earth, instead. These are beautiful things, and the content of science is wonderfully full of them. They are very inspiring, and they can be used to inspire others.”

“Physicists are trying to find out how nature behaves; they may talk carelessly about some “ultimate particle” because that’s the way nature looks at a given moment, but . . . Suppose people are exploring a new continent, OK? They see water coming along the ground, they’ve seen that before, and they call it “rivers.” So they say they’re exploring to find the headwaters, they go upriver, and sure enough, there they are, it’s all going very well. But lo and behold, when they get up far enough they find the whole system’s different: There’s a great big lake, or springs, or the rivers run in a circle. You might say, “Aha! They’ve failed!” but not at all! The real reason they were doing it was to explore the land. If it turned out not to be headwaters, they might be slightly embarrassed at their carelessness in explaining themselves, but no more than that. As long as it looks like the way things are built is wheels within wheels, then you’re looking for the innermost wheel – but it might not be that way, in which case you’re looking for whatever the hell it is that you find!”

 

June 20, 2019 Posted by | Books, Physics, Science | Leave a comment

Quotes

i. “The party that negotiates in haste is often at a disadvantage.” (Howard Raiffa)

ii. “Advice: don’t embarrass your bargaining partner by forcing him or her to make all the concessions.” (-ll-)

iii. “Disputants often fare poorly when they each act greedily and deceptively.” (-ll-)

iv. “Each man does seek his own interest, but, unfortunately, not according to the dictates of reason.” (Kenneth Waltz)

v. “Whatever is said after I’m gone is irrelevant.” (Jimmy Savile)

vi. “Trust is an important lubricant of a social system. It is extremely efficient; it saves a lot of trouble to have a fair degree of reliance on other people’s word. Unfortunately this is not a commodity which can be bought very easily. If you have to buy it, you already have some doubts about what you have bought.” (Kenneth Arrow)

vii. “… an author never does more damage to his readers than when he hides a difficulty.” (Évariste Galois)

viii. “A technical argument by a trusted author, which is hard to check and looks similar to arguments known to be correct, is hardly ever checked in detail” (Vladimir Voevodsky)

ix. “Suppose you want to teach the “cat” concept to a very young child. Do you explain that a cat is a relatively small, primarily carnivorous mammal with retractible claws, a distinctive sonic output, etc.? I’ll bet not. You probably show the kid a lot of different cats, saying “kitty” each time, until it gets the idea. To put it more generally, generalizations are best made by abstraction from experience. They should come one at a time; too many at once overload the circuits.” (Ralph P. Boas Jr.)

x. “Every author has several motivations for writing, and authors of technical books always have, as one motivation, the personal need to understand; that is, they write because they want to learn, or to understand a phenomenon, or to think through a set of ideas.” (Albert Wymore)

xi. “Great mathematics is achieved by solving difficult problems not by fabricating elaborate theories in search of a problem.” (Harold Davenport)

xii. “Is science really gaining in its assault on the totality of the unsolved? As science learns one answer, it is characteristically true that it also learns several new questions. It is as though science were working in a great forest of ignorance, making an ever larger circular clearing within which, not to insist on the pun, things are clear… But as that circle becomes larger and larger, the circumference of contact with ignorance also gets longer and longer. Science learns more and more. But there is an ultimate sense in which it does not gain; for the volume of the appreciated but not understood keeps getting larger. We keep, in science, getting a more and more sophisticated view of our essential ignorance.” (Warren Weaver)

xiii. “When things get too complicated, it sometimes makes sense to stop and wonder: Have I asked the right question?” (Enrico Bombieri)

xiv. “The mean and variance are unambiguously determined by the distribution, but a distribution is, of course, not determined by its mean and variance: A number of different distributions have the same mean and the same variance.” (Richard von Mises)

xv. “Algorithms existed for at least five thousand years, but people did not know that they were algorithmizing. Then came Turing (and Post and Church and Markov and others) and formalized the notion.” (Doron Zeilberger)

xvi. “When a problem seems intractable, it is often a good idea to try to study “toy” versions of it in the hope that as the toys become increasingly larger and more sophisticated, they would metamorphose, in the limit, to the real thing.” (-ll-)

xvii. “The kind of mathematics foisted on children in schools is not meaningful, fun, or even very useful. This does not mean that an individual child cannot turn it into a valuable and enjoyable personal game. For some the game is scoring grades; for others it is outwitting the teacher and the system. For many, school math is enjoyable in its repetitiveness, precisely because it is so mindless and dissociated that it provides a shelter from having to think about what is going on in the classroom. But all this proves is the ingenuity of children. It is not a justification for school math to say that despite its intrinsic dullness, inventive children can find excitement and meaning in it.” (Seymour Papert)

xviii. “The optimist believes that this is the best of all possible worlds, and the pessimist fears that this might be the case.” (Ivar Ekeland)

xix. “An equilibrium is not always an optimum; it might not even be good. This may be the most important discovery of game theory.” (-ll-)

xx. “It’s not all that rare for people to suffer from a self-hating monologue. Any good theories about what’s going on there?”

“If there’s things you don’t like about your life, you can blame yourself, or you can blame others. If you blame others and you’re of low status, you’ll be told to cut that out and start blaming yourself. If you blame yourself and you can’t solve the problems, self-hate is the result.” (Nancy Lebovitz & ‘The Nybbler’)

December 1, 2017 Posted by | Mathematics, Quotes/aphorisms, Science, Statistics | 4 Comments

Einstein quotes

“Einstein emerges from this collection of quotes, drawn from many different sources, as a complete and fully rounded human being […] Knowledge of the darker side of Einstein’s life makes his achievement in science and in public affairs even more miraculous. This book shows him as he was – not a superhuman genius but a human genius, and all the greater for being human.”

I’ve recently read The Ultimate Quotable Einstein, from whose foreword the above quote is taken. The book contains roughly 1600 quotes by or about Albert Einstein; most of the quotes are by Einstein himself, but it also includes more than 50 pages towards the end containing quotes by others about him. I was probably not in the main target group, but I do like good quote collections, and I figured there might be enough good quotes in the book to make it worth a try. On the other hand, after having read the foreword by Freeman Dyson I knew there would probably be a lot of quotes in the book which I wouldn’t find very interesting; I’m not really sure why I should give a crap if/why a guy who died more than 60 years ago, and whom I have never met and never will, was having an affair during the early 1920s, or why I should care what Einstein thought about his mother or his ex-wife – but if that kind of thing interests you, the book covers it as well. My own interest in Einstein, such as it is, is mainly in ‘Einstein the scientist’ (and perhaps also, in this particular context, ‘Einstein the aphorist’), not ‘Einstein the father’ or ‘Einstein the husband’. I also don’t find his political views very interesting, but again, if you want to know what Einstein thought about things like Zionism, pacifism, and world government, the book includes quotes about such topics as well.

Overall I was a little underwhelmed by the book and the quotes it includes, but people who are interested in knowing more about Einstein will likely find a lot of valuable source material here, and I did give the book 3 stars on goodreads. I did learn a lot of new things about Einstein by reading it, though this is not surprising given how little I knew about him beforehand; for example I had no idea that he was offered the presidency of Israel a few years before his death. I noticed only two quotes which were included more than once (a quote on pages 187-188 was repeated on page 453, and a quote on page 295 was repeated on page 455), and although I cannot guarantee that there aren’t any other repeats, almost all quotes in the book appear only once. However, it should also be mentioned that there are a few quotes on specific themes which are very similar to other quotes included elsewhere in the coverage; I consider this unavoidable considering the number of quotes included.

I have included some sample quotes from the book below, on a wide variety of topics. All quotes below without an attribution are sourced quotes by Einstein himself (the book also contains a small collection of quotes ‘attributed to Einstein’, many of which are either unsourced or sourced in such a manner that Calaprice did not feel convinced that the quote was actually by Einstein – none of the quotes from that part of the book are included below).

“When a blind beetle crawls over the surface of a curved branch, it doesn’t notice that the track it has covered is indeed curved. I was lucky enough to notice what the beetle didn’t notice.” (“in answer to his son Eduard’s question about why he is so famous, 1922.”)

“The most valuable thing a teacher can impart to children is not knowledge and understanding per se but a longing for knowledge and understanding” (see on a related note also Susan Engel’s book – US)

“Teaching should be such that what is offered is perceived as a valuable gift and not as a hard duty.”

“I am not prepared to accept all his conclusions, but I consider his work an immensely valuable contribution to the science of human behavior.” (Einstein said this about Sigmund Freud during an interview. Yeah…)

“I consider him the best of the living writers.” (on Bertrand Russell. Russell incidentally also admired Einstein immensely – the last part of the book, including quotes by others about Einstein, includes this one by him: “Of all the public figures that I have known, Einstein was the one who commanded my most wholehearted admiration.”)

“I cannot understand the passive response of the whole civilized world to this modern barbarism. Doesn’t the world see that Hitler is aiming for war?” (1933. Related link.)

“Children don’t heed the life experience of their parents, and nations ignore history. Bad lessons always have to be learned anew.”

“Few people are capable of expressing with equanimity opinions that differ from the prejudices of their social environment. Most people are even incapable of forming such opinions.”

“Sometimes one pays most for things one gets for nothing.”

“Thanks to my fortunate idea of introducing the relativity principle into physics, you (and others) now enormously overrate my scientific abilities, to the point where this makes me quite uncomfortable.” (To Arnold Sommerfeld, 1908)

“No fairer destiny could be allotted to any physical theory than that it should of itself point out the way to the introduction of a more comprehensive theory, in which it lives on as a limiting case.”

“Mother nature, or more precisely an experiment, is a resolute and seldom friendly referee […]. She never says “yes” to a theory; but only “maybe” under the best of circumstances, and in most cases simply “no”.”

“The aim of science is, on the one hand, a comprehension, as complete as possible, of the connection between the sense experiences in their totality, and, on the other hand, the accomplishment of this aim by the use of a minimum of primary concepts and relations.” A related quote from the book: “Although it is true that it is the goal of science to discover rules which permit the association and foretelling of facts, this is not its only aim. It also seeks to reduce the connections discovered to the smallest possible number of mutually independent conceptual elements. It is in this striving after the rational unification of the manifold that it encounters its greatest successes.”

“According to general relativity, the concept of space detached from any physical content does not exist. The physical reality of space is represented by a field whose components are continuous functions of four independent variables – the coordinates of space and time.”

“One thing I have learned in a long life: that all our science, measured against reality, is primitive and childlike – and yet it is the most precious thing we have.”

“”Why should I? Everybody knows me there” (upon being told by his wife to dress properly when going to the office). “Why should I? No one knows me there” (upon being told to dress properly for his first big conference).”

“Marriage is but slavery made to appear civilized.”

“Nothing is more destructive of respect for the government and the law of the land than passing laws that cannot be enforced.”

“Einstein would be one of the greatest theoretical physicists of all time even if he had not written a single line on relativity.” (Max Born)

“Einstein’s [violin] playing is excellent, but he does not deserve his world fame; there are many others just as good.” (“A music critic on an early 1920s performance, unaware that Einstein’s fame derived from physics, not music. Quoted in Reiser, Albert Einstein, 202-203″)

April 12, 2016 Posted by | Books, History, Physics, Quotes/aphorisms, Science | Leave a comment

Quotes

i. “By all means think yourself big but don’t think everyone else small” (‘Notes on Flyleaf of Fresh ms. Book’, Scott’s Last Expedition. See also this).

ii. “The man who knows everyone’s job isn’t much good at his own.” (-ll-)

iii. “It is amazing what little harm doctors do when one considers all the opportunities they have” (Mark Twain, as quoted in the Oxford Handbook of Clinical Medicine, p.595).

iv. “A first-rate theory predicts; a second-rate theory forbids and a third-rate theory explains after the event.” (Aleksander Isaakovich Kitaigorodski)

v. “[S]ome of the most terrible things in the world are done by people who think, genuinely think, that they’re doing it for the best” (Terry Pratchett, Snuff).

vi. “That was excellently observ’d, say I, when I read a Passage in an Author, where his Opinion agrees with mine. When we differ, there I pronounce him to be mistaken.” (Jonathan Swift)

vii. “Death is nature’s master stroke, albeit a cruel one, because it allows genotypes space to try on new phenotypes.” (Quote from the Oxford Handbook of Clinical Medicine, p.6)

viii. “The purpose of models is not to fit the data but to sharpen the questions.” (Samuel Karlin)

ix. “We may […] view set theory, and mathematics generally, in much the way in which we view theoretical portions of the natural sciences themselves; as comprising truths or hypotheses which are to be vindicated less by the pure light of reason than by the indirect systematic contribution which they make to the organizing of empirical data in the natural sciences.” (Quine)

x. “At root what is needed for scientific inquiry is just receptivity to data, skill in reasoning, and yearning for truth. Admittedly, ingenuity can help too.” (-ll-)

xi. “A statistician carefully assembles facts and figures for others who carefully misinterpret them.” (Quote from Mathematically Speaking – A Dictionary of Quotations, p.329. Only source given in the book is: “Quoted in Evan Esar, 20,000 Quips and Quotes“)

xii. “A knowledge of statistics is like a knowledge of foreign languages or of algebra; it may prove of use at any time under any circumstances.” (Quote from Mathematically Speaking – A Dictionary of Quotations, p. 328. The source provided is: “Elements of Statistics, Part I, Chapter I (p.4)”).

xiii. “We own to small faults to persuade others that we have not great ones.” (Rochefoucauld)

xiv. “There is more self-love than love in jealousy.” (-ll-)

xv. “We should not judge of a man’s merit by his great abilities, but by the use he makes of them.” (-ll-)

xvi. “We should gain more by letting the world see what we are than by trying to seem what we are not.” (-ll-)

xvii. “Put succinctly, a prospective study looks for the effects of causes whereas a retrospective study examines the causes of effects.” (Quote from p.49 of Principles of Applied Statistics, by Cox & Donnelly)

xviii. “… he who seeks for methods without having a definite problem in mind seeks for the most part in vain.” (David Hilbert)

xix. “Give every man thy ear, but few thy voice” (Shakespeare).

xx. “Often the fear of one evil leads us into a worse.” (Nicolas Boileau-Despréaux)

 

November 22, 2015 Posted by | Books, Mathematics, Medicine, Philosophy, Quotes/aphorisms, Science, Statistics | Leave a comment

The Nature of Statistical Evidence

Here’s my goodreads review of the book.

As I’ve observed many times before, a wordpress blog like mine is not a particularly nice place to cover mathematical topics involving equations and lots of Greek letters, so the coverage below will be more or less purely conceptual; don’t take this to mean that the book doesn’t contain formulas. Some parts of the book look like this:

[image: an equation-dense excerpt from the book (Loève)]
That of course makes the book hard to blog, for other reasons as well as the fact that it’s typographically hard to deal with the equations here. In general it’s hard to talk about the content of a book like this one without going into a lot of details outlining how you get from A to B to C – usually you’re only really interested in C, but you need A and B to make sense of C. At this point I’ve more or less concluded that when covering books like this one, I’ll only cover some of the main themes which are easy to discuss in a blog post, and skip coverage of (potentially important) points which might also be of interest if they’re difficult to discuss in a small amount of space – which is unfortunately often the case. I should perhaps observe that although I noted in my goodreads review that there was, for my taste, a bit too much philosophy and a bit too little statistics in the coverage, you should definitely not take that objection to mean that this book is full of fluff; a lot of that philosophical material is ‘formal logic’ type stuff and related comments, and the book in general is quite dense. As I also noted in the goodreads review, I didn’t read this book as carefully as I might have done – for example I skipped a couple of the technical proofs because they didn’t seem to be worth the effort – and I’d probably need to read it again to fully understand some of the minor points made throughout the more technical parts of the coverage. That is of course a related reason why I don’t cover the book in great detail here: it’s hard work just to read the damn thing, and to talk about the technical stuff in detail here as well would definitely be overkill, even if it would surely make me understand the material better.

I have added some observations from the coverage below. I’ve tried to clarify beforehand which question/topic the quote in question deals with, to ease reading/understanding of the topics covered.

On how statistical methods are related to experimental science:

“statistical methods have aims similar to the process of experimental science. But statistics is not itself an experimental science, it consists of models of how to do experimental science. Statistical theory is a logical — mostly mathematical — discipline; its findings are not subject to experimental test. […] The primary sense in which statistical theory is a science is that it guides and explains statistical methods. A sharpened statement of the purpose of this book is to provide explanations of the senses in which some statistical methods provide scientific evidence.”

On mathematics and axiomatic systems (the book goes into much more detail than this):

“It is not sufficiently appreciated that a link is needed between mathematics and methods. Mathematics is not about the world until it is interpreted and then it is only about models of the world […]. No contradiction is introduced by either interpreting the same theory in different ways or by modeling the same concept by different theories. […] In general, a primitive undefined term is said to be interpreted when a meaning is assigned to it and when all such terms are interpreted we have an interpretation of the axiomatic system. It makes no sense to ask which is the correct interpretation of an axiom system. This is a primary strength of the axiomatic method; we can use it to organize and structure our thoughts and knowledge by simultaneously and economically treating all interpretations of an axiom system. It is also a weakness in that failure to define or interpret terms leads to much confusion about the implications of theory for application.”

It’s all about models:

“The scientific method of theory checking is to compare predictions deduced from a theoretical model with observations on nature. Thus science must predict what happens in nature but it need not explain why. […] whether experiment is consistent with theory is relative to accuracy and purpose. All theories are simplifications of reality and hence no theory will be expected to be a perfect predictor. Theories of statistical inference become relevant to scientific process at precisely this point. […] Scientific method is a practice developed to deal with experiments on nature. Probability theory is a deductive study of the properties of models of such experiments. All of the theorems of probability are results about models of experiments.”

But given a frequentist interpretation you can test your statistical theories with the real world, right? Right? Well…

“How might we check the long run stability of relative frequency? If we are to compare mathematical theory with experiment then only finite sequences can be observed. But for the Bernoulli case, the event that frequency approaches probability is stochastically independent of any sequence of finite length. […] Long-run stability of relative frequency cannot be checked experimentally. There are neither theoretical nor empirical guarantees that, a priori, one can recognize experiments performed under uniform conditions and that under these circumstances one will obtain stable frequencies.” [related link]
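The quoted point about finite sequences is easy to illustrate with a small simulation (my own sketch in Python, not something from the book): however many Bernoulli trials you generate, the relative frequency you observe is a fact about that particular finite run only – it fluctuates from run to run, and no finite run can verify the limiting behavior that the frequentist interpretation appeals to.

```python
import random

def relative_frequency(p, n, seed):
    """Observed relative frequency of successes in n Bernoulli(p) trials."""
    rng = random.Random(seed)
    successes = sum(rng.random() < p for _ in range(n))
    return successes / n

# Several independent finite runs with the same underlying p = 0.5:
# the observed frequencies hover around 0.5, but each is only a fact about
# its own finite run - the limit itself is never observed.
for n in (10, 100, 1000, 10000):
    freqs = [round(relative_frequency(0.5, n, seed), 3) for seed in range(5)]
    print(f"n = {n:>5}: {freqs}")
```

However large n is made, the claim that the frequency converges to p concerns the infinite tail of the sequence, which is exactly the point the quoted passage is making.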

What should we expect to get out of mathematical and statistical theories of inference?

“What can we expect of a theory of statistical inference? We can expect an internally consistent explanation of why certain conclusions follow from certain data. The theory will not be about inductive rationality but about a model of inductive rationality. Statisticians are used to thinking that they apply their logic to models of the physical world; less common is the realization that their logic itself is only a model. Explanation will be in terms of introduced concepts which do not exist in nature. Properties of the concepts will be derived from assumptions which merely seem reasonable. This is the only sense in which the axioms of any mathematical theory are true […] We can expect these concepts, assumptions, and properties to be intuitive but, unlike natural science, they cannot be checked by experiment. Different people have different ideas about what “seems reasonable,” so we can expect different explanations and different properties. We should not be surprised if the theorems of two different theories of statistical evidence differ. If two models had no different properties then they would be different versions of the same model […] We should not expect to achieve, by mathematics alone, a single coherent theory of inference, for mathematical truth is conditional and the assumptions are not “self-evident.” Faith in a set of assumptions would be needed to achieve a single coherent theory.”

On disagreements about the nature of statistical evidence:

“The context of this section is that there is disagreement among experts about the nature of statistical evidence and consequently much use of one formulation to criticize another. Neyman (1950) maintains that, from his behavioral hypothesis testing point of view, Fisherian significance tests do not express evidence. Royall (1997) employs the “law” of likelihood to criticize hypothesis as well as significance testing. Pratt (1965), Berger and Selke (1987), Berger and Berry (1988), and Casella and Berger (1987) employ Bayesian theory to criticize sampling theory. […] Critics assume that their findings are about evidence, but they are at most about models of evidence. Many theoretical statistical criticisms, when stated in terms of evidence, have the following outline: According to model A, evidence satisfies proposition P. But according to model B, which is correct since it is derived from “self-evident truths,” P is not true. Now evidence can’t be two different ways so, since B is right, A must be wrong. Note that the argument is symmetric: since A appears “self-evident” (to adherents of A) B must be wrong. But both conclusions are invalid since evidence can be modeled in different ways, perhaps useful in different contexts and for different purposes. From the observation that P is a theorem of A but not of B, all we can properly conclude is that A and B are different models of evidence. […] The common practice of using one theory of inference to critique another is a misleading activity.”

Is mathematics a science?

“Is mathematics a science? It is certainly systematized knowledge much concerned with structure, but then so is history. Does it employ the scientific method? Well, partly; hypothesis and deduction are the essence of mathematics and the search for counter examples is a mathematical counterpart of experimentation; but the question is not put to nature. Is mathematics about nature? In part. The hypotheses of most mathematics are suggested by some natural primitive concept, for it is difficult to think of interesting hypotheses concerning nonsense syllables and to check their consistency. However, it often happens that as a mathematical subject matures it tends to evolve away from the original concept which motivated it. Mathematics in its purest form is probably not natural science since it lacks the experimental aspect. Art is sometimes defined to be creative work displaying form, beauty and unusual perception. By this definition pure mathematics is clearly an art. On the other hand, applied mathematics, taking its hypotheses from real world concepts, is an attempt to describe nature. Applied mathematics, without regard to experimental verification, is in fact largely the “conditional truth” portion of science. If a body of applied mathematics has survived experimental test to become trustworthy belief then it is the essence of natural science.”

Then what about statistics – is statistics a science?

“Statisticians can and do make contributions to subject matter fields such as physics, and demography but statistical theory and methods proper, distinguished from their findings, are not like physics in that they are not about nature. […] Applied statistics is natural science but the findings are about the subject matter field not statistical theory or method. […] Statistical theory helps with how to do natural science but it is not itself a natural science.”

I should note that I am, and have for a long time been, in broad agreement with the author’s remarks on the nature of science and mathematics above. Popper, among many others, discussed this topic a long time ago, e.g. in The Logic of Scientific Discovery, and for probably a decade I’ve basically been of the opinion that (‘pure’) mathematics is not science but rather ‘something else’ – which doesn’t mean it’s not useful. I’ve had a harder time figuring out how precisely to think about statistics in these terms, and in that context the book has been conceptually helpful.

Below I’ve added a few links to other stuff also covered in the book:
Propositional calculus.
Kolmogorov’s axioms.
Neyman-Pearson lemma.
Radon-Nikodym theorem. (Not covered in the book, but the necessity of using ‘a Radon-Nikodym derivative’ to obtain an answer to a particular question was remarked upon at one point, and I had no clue what he was talking about – it seems that the material in the link is what he was referring to.)
A very specific and relevant link: Berger and Wolpert (1984). The material on Birnbaum’s argument from p.24 (p.40) onward is covered in some detail in the book; the author is critical of the model and explains in some detail why that is. See also: On the foundations of statistical inference (Birnbaum, 1962).

October 6, 2015 Posted by | Books, Mathematics, Papers, Philosophy, Science, Statistics | 4 Comments

Introduction to Systems Analysis: Mathematically Modeling Natural Systems (I)

“This book was originally developed alongside the lecture Systems Analysis at the Swiss Federal Institute of Technology (ETH) Zürich, on the basis of lecture notes developed over 12 years. The lecture, together with others on analysis, differential equations and linear algebra, belongs to the basic mathematical knowledge imparted on students of environmental sciences and other related areas at ETH Zürich. […] The book aims to be more than a mathematical treatise on the analysis and modeling of natural systems, yet a certain set of basic mathematical skills are still necessary. We will use linear differential equations, vector and matrix calculus, linear algebra, and even take a glimpse at nonlinear and partial differential equations. Most of the mathematical methods used are covered in the appendices. Their treatment there is brief however, and without proofs. Therefore it will not replace a good mathematics textbook for someone who has not encountered this level of math before. […] The book is firmly rooted in the algebraic formulation of mathematical models, their analytical solution, or — if solutions are too complex or do not exist — in a thorough discussion of the anticipated model properties.”

I finished the book yesterday – here’s my goodreads review (note that the first link in this post is not to the book’s goodreads profile, because goodreads has listed the book under the wrong title). I’ve never read a book about ‘systems analysis’ before, but as I also mention in the goodreads review, it turned out that much of the material covered was stuff I’d seen before. There are 8 chapters in the book. Chapter one is a brief introductory chapter; the second chapter contains a short overview of types of mathematical models (static models, dynamic models, discrete and continuous time models, stochastic models…); the third chapter is a brief chapter about static models (the rest of the book is about dynamic models, but they want you to at least know the difference); the fourth chapter deals with linear (differential equation) models with one variable; chapter 5 extends the analysis to linear models with several variables; chapter 6 is about non-linear models (it covers e.g. the Lotka-Volterra model (of course) and the Holling-Tanner model – both were covered in Ecological Dynamics, in much more detail); chapter 7 deals briefly with time-discrete models and how they differ from continuous-time models (I liked Gurney and Nisbet’s coverage of this material a lot better, as that book had many more details about these things); and chapter 8 concludes with models including both a time- and a space-dimension, which leads to coverage of concepts such as mixing and transformation, advection, diffusion, and exchange in a model context.

How to derive solutions to various types of differential equations, how to calculate eigenvalues and what these tell you about the model dynamics (and how to deal with them when they’re complex), phase diagrams/phase planes and topographical maps of system dynamics, fixed points/steady states and their properties, what an attractor is, what hysteresis is and in which model contexts the phenomenon might be present, the difference between homogeneous and non-homogeneous differential equations and between first-order and higher-order differential equations, what role the initial conditions play in various contexts, etc. – it’s this kind of book. The applications included in the book are varied; some of the examples are (as already mentioned) drawn from ecology/mathematical biology (there are also e.g. models of phosphate distribution/dynamics in lakes and models of fish population dynamics), others are from chemistry (e.g. models dealing with gas exchange – Fick’s laws of diffusion are covered, for example, and they also talk about Henry’s law) and physics (e.g. the harmonic oscillator, the Lorenz model) – there are even a few examples from economics (e.g. dealing with interest rates). As they put it in the introduction, “Although most of the examples used here are drawn from the environmental sciences, this book is not an introduction to the theory of aquatic or terrestrial environmental systems. Rather, a key goal of the book is to demonstrate the virtually limitless practical potential of the methods presented.” I’m not sure if they succeeded, but it’s certainly clear from the coverage that you can use the tools they cover in a lot of different contexts.
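To give a flavor of the eigenvalue part of this material, here is a minimal sketch (my own illustration in Python/numpy, not an example from the book) of how the eigenvalues of a linear two-variable system dx/dt = Ax characterize the fixed point at the origin: eigenvalues with negative real parts mean nearby trajectories decay towards the fixed point, a positive real part means it is unstable, and non-zero imaginary parts mean the trajectories oscillate/spiral.

```python
import numpy as np

def classify_fixed_point(A):
    """Classify the fixed point at the origin of the linear system dx/dt = A @ x."""
    eigenvalues = np.linalg.eigvals(A)
    if np.all(eigenvalues.real < 0):
        stability = "stable"
    elif np.any(eigenvalues.real > 0):
        stability = "unstable"
    else:
        stability = "marginally stable"
    motion = "oscillatory (spiral/centre)" if np.any(eigenvalues.imag != 0) else "non-oscillatory (node/saddle)"
    return eigenvalues, f"{stability}, {motion}"

# A damped oscillator written as a first-order system: x' = v, v' = -k*x - c*v.
k, c = 2.0, 0.5
A = np.array([[0.0, 1.0],
              [-k, -c]])
print(classify_fixed_point(A))  # complex eigenvalues with negative real parts -> a stable spiral
```

The same idea carries over to the non-linear models in the book (e.g. Lotka-Volterra): you linearize around a fixed point by computing the Jacobian there and read the local dynamics off its eigenvalues.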

I’m not quite sure how much mathematics you’ll need to know in order to read and understand this book on your own. The coverage seems to me to assume some familiarity with linear algebra, multi-variable calculus, and complex analysis (and the related trigonometry), and perhaps also basic combinatorics – factorials, for example, are used without any comment on how they work. You should probably take the authors at their word when they say above that the book “will not replace a good mathematics textbook for someone who has not encountered this level of math before”. A related observation is that regardless of whether you’ve seen this sort of material before or not, this is probably not the sort of book you’ll be able to read in a day or two.

I think I’ll try to cover the book in more detail (with much more specific coverage of some main points) tomorrow.

February 11, 2015 Posted by | Books, Mathematics, Science | Leave a comment

A couple of abstracts

Abstract:

“There are both costs and benefits associated with conducting scientific and technological research. Whereas the benefits derived from scientific research and new technologies have often been addressed in the literature (for a good example, see Evenson et al., 1979), the major non-monetary societal costs associated with large expenditures on scientific research and technology have so far received little attention.

In this paper we investigate one of the major non-monetary societal cost variables associated with the conduct of scientific and technological research in the United States, namely the suicides resulting from research activities. In particular, we analyze the association between scientific and technological research expenditure patterns and the number of suicides committed using one of the most common suicide methods, namely that of hanging, strangulation and suffocation (-HSS). We conclude from our analysis that there’s a very strong association between scientific research expenditures in the US and the frequency of suicides committed using the HSS method, and that this relationship has been stable for at least a decade. An important aspect in the context of the association is the precise mechanisms through which the increase in HSSs takes place. Although the mechanisms are still not well elucidated, we suggest that one of the important components in this relationship may be judicial research, as initial analyses of related data have suggested that this variable may be important. We argue in the paper that our initial findings in this context provide impetus for considering this pathway a particularly important area of future research in this field.”

Key findings:

Graph 1:

[Graph: US spending on science, space, and technology plotted against suicides by hanging, strangulation and suffocation]

Abstract:

“Murders by bodily force (-Mbf) make up a substantial number of all homicides in the US. Previous research on the topic has shown that this criminal activity causes the compromise of some common key biological functions in victims, such as respiration and cardiac function, and that many people with close social relationships with the victims are psychosocially affected as well, which means that this societal problem is clearly of some importance.

Researchers have known for a long time that the marital state of the inhabitants of the state of Mississippi and the dynamics of this variable have important nation-wide effects. Previous research has e.g. analyzed how the marriage rate in Mississippi determines the US per capita consumption of whole milk. In this paper we investigate how the dynamics of Mississippian marital patterns relate to the national Mbf numbers. We conclude from our analysis that it is very clear that there’s a strong association between the divorce rate in Mississippi and the national level of Mbf. We suggest that the effect may go through previously established channels such as e.g. milk consumption, but we also note that the precise relationship has yet to be elucidated and that further research on this important topic is clearly needed.”

Key findings:

[Graph: Divorce rate in Mississippi plotted against murders by bodily force]

This abstract is awesome as well, but I didn’t write it…

The ‘funny’ part is that I could actually easily imagine papers not too dissimilar to the ones just outlined getting published in scientific journals. Indeed, in terms of the structure I’d claim that many published papers are exactly like this. They do significance testing as well, sure, but hunting down p-values is not much different from hunting down correlations and it’s quite easy to do both. If that’s all you have, you haven’t shown much.
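If you doubt the ‘it’s quite easy’ part, here’s a small Python sketch (my own toy illustration, assuming numpy) of what correlation-hunting amounts to: generate a bunch of independent random walks, i.e. series which by construction have nothing to do with each other, and then simply pick out the pair that happens to correlate most strongly.

import numpy as np

rng = np.random.default_rng(0)
n_series, n_years = 50, 25

# 50 independent random walks posing as 'annual time series'
series = np.cumsum(rng.normal(size=(n_series, n_years)), axis=1)

corr = np.corrcoef(series)      # 50x50 matrix of pairwise correlations
np.fill_diagonal(corr, 0)       # ignore the trivial self-correlations

i, j = np.unravel_index(np.abs(corr).argmax(), corr.shape)
print(f"'Best' pair: series {i} and {j}, correlation = {corr[i, j]:.2f}")

With roughly 1,200 pairs to choose from, the ‘winning’ correlation will routinely be very high, and the associated p-value will look very impressive indeed – which is of course exactly the point: if all you report is the pair you went hunting for, you haven’t shown much.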

July 29, 2014 Posted by | Random stuff, Science, Statistics | 2 Comments

The Structure of Scientific Revolutions

I read the book yesterday. Here’s what I wrote on goodreads:

“I’m not rating this, but I’ll note that ‘it’s an interesting model.’

I’d only really learned (…heard?) about Kuhn’s ideas through cultural osmosis (and/or perhaps a brief snippet of his work in HS? Maybe. I honestly can’t remember if we read Kuhn back then…). It’s worth actually reading the book, and I should probably have done that a long time ago.”

I was thinking about just quoting extensively from the work in this post in order to make clear what the book is about, but I’m not sure this is actually the best way to proceed. I know some readers of this blog have already read Kuhn, so it may in some sense be more useful if I say a little bit about what I think about the things he’s said, rather than focusing only on what he’s said in the work. I’ve tried to make this the sort of post that can be read and enjoyed both by people who have not read Kuhn, and by people who have, though I may not have been successful. That said, I have felt it necessary to include at least a few quotes from the work along the way in the following, in order not to misrepresent Kuhn too much.

So anyway, ‘the general model’ Kuhn has of science is one where there are three states of science. ‘Normal science’ is perhaps the most common state (this is actually not completely clear as I don’t think he ever explicitly says as much (I may be wrong), and the inclusion of concepts like ‘mini-revolutions’ (the ‘revolutions can happen on many levels’-part) makes things even less clear, but I don’t think this is an unreasonable interpretation), where scientists in a given field have adopted a given paradigm and work and tinker with stuff within that paradigm, exploring all the nooks and crannies: “‘normal science’ means research firmly based upon one or more past scientific achievements, achievements that some particular scientific community acknowledges for a time as supplying the foundation for its further practice.” Exactly what a paradigm is is still a bit unclear to me, as he seems to me to be using the term in a lot of different ways (“One sympathetic reader, who shares my conviction that ‘paradigm’ names the central philosophical elements of the book, prepared a partial analytic index and concluded that the term is used in at least twenty-two different ways.” – a quote from the postscript).

So there’s ‘normal science’, where everything is sort of proceeding according to plan. And then there are two other states: A state of crisis, and a state of revolution. A crisis state is a state which comes about when the scientists working in their nooks and crannies gradually come to realize that perhaps the model of the world they’ve been using (‘paradigm’) may not be quite right. Something is off, the model has problems explaining some of the results – so they start questioning some of the defining assumptions. During a crisis scientists become less constrained by the paradigm when looking at the world, research becomes in some sense more random; a lot of new ideas pop up as to how to deal with the problem(s), and at some point a scientific revolution resolves the crisis – a new model replaces the old one, and the scientists can go back to doing ‘normal science’ work, which is now defined by the new paradigm rather than the old one. Young people and/or people not too closely affiliated with the old model/paradigm are, Kuhn argues, more likely to come up with the new idea that will resolve the problem which caused the crisis, and young people and new people in the field are more likely than their older colleagues to ‘convert’ to the new way of thinking. Such dynamics are actually, he adds, part of what keeps ‘normal science’ going and makes it able to proceed in the manner it does; scientists are skeptical people, and if scientists were to question the basic assumptions of the field they’re working in all the time, they’d never be able to specialize in the way they do, exploring all the nooks and crannies; they’d be spending all their time arguing about the basics instead. It should be noted that crises don’t always lead to a revolution; sometimes a crisis can be resolved without one. He also argues that sometimes a revolution can take place without a major crisis, though he does seem to consider the existence of such crises important to his overall thesis. Crises and revolutions need not be the result of annoying data that does not fit – they may also be the result of e.g. technological advances, like the development of new tools and technology which can e.g. enable scientists to see things they did not use to be able to see. Sometimes the theory upon which a new paradigm is based was presented much earlier, during the ‘normal science’ phase, but nobody took the theory seriously back then because the problems that led to the crisis had not really manifested at that time.

Scientists make progress when they’re doing normal science, in the sense that they tend to learn a lot of new stuff about the world during these phases. But revolutions can both overturn some of that progress (‘that was not the right way to think about these things’) and lead to further progress and new knowledge. An important thing to note here is that how paradigms change is in part a sociological process; part of what leads to change is the popularity of different models. Kuhn argues that scientists tend to prefer new paradigms which solve many of the same problems the old paradigm did, as well as some of those troublesome problems which led to the crisis – so it’s not like revolutions will necessarily lead people back to square one, with all the scientific progress made during the preceding ‘normal science’ period wiped out. But there are some problems. Textbooks, Kuhn argues, are written by the winners (i.e. the people who picked the right paradigm and get to write textbooks), and so they will often deliberately and systematically downplay the differences between the scientists working in the field now and those who worked in the field – or in what came before it (the fact that normal science is conducted at all is a sign of the maturity of a field, Kuhn notes) – in the past, painting a picture of gradual, cumulative progress in the field (gigantum humeris insidentes) which perhaps is not the right way to think about what has actually happened. Sometimes a revolution will make scientists stop asking questions they used to ask, without any answer being provided by the new paradigm; there are costs as well as benefits associated with the dramatic change that takes place during scientific revolutions:

“In the process the community will sustain losses. Often some old problems must be banished. Frequently, in addition, revolution narrows the scope of the community’s professional concerns, increases the extent of its specialization, and attenuates its communication with other groups, both scientific and lay. Though science surely grows in depth, it may not grow in breadth as well. If it does so, that breadth is manifest mainly in the proliferation of scientific specialties, not in the scope of any single specialty alone. Yet despite these and other losses to the individual communities, the nature of such communities provides a virtual guarantee that both the list of problems solved by science and the precision of individual problem-solutions will grow and grow. At least, the nature of the community provides such a guarantee if there is any way at all in which it can be provided. What better criterion than the decision of the scientific group could there be?”

I quote this part also to focus in on an area where I am in disagreement with Kuhn – this relates to his implicit assumption that scientific paradigms (whatever that term may mean) are decided by scientists alone. Certainly this is not the case to the extent that the scientific paradigms equal the rules of the game for conducting science. This is actually one of several major problems I have with the model. Doing science requires money, and people who pay for the stuff will have their own ideas about what you can get away with asking questions about. What the people paying for the stuff have allowed scientists to investigate has changed over time, but some things have changed more than others and what might be termed ‘the broader cultural dimension’ seems important to me; those variables may play a very important role in deciding where science and scientists may or may not go, and although the book deals with sociological stuff in quite a bit of detail, the exclusion of broader cultural and political factors in the model is ‘a bit’ of a problem to me. Scientists are certainly not unconstrained today by such ‘external factors’, and/but most scientists alive today will not face anywhere near the same kinds of constraints on their research as their forebears living 300 years ago did – religion is but one of several elephants in the room (and that one is still really important in some parts of the world, though the role it plays may have changed).

Another big problem is how to test a model like this. Kuhn doesn’t try. He only talks about anecdotes; specific instances, examples which according to him illustrate a broader point. I’m not sure his model is completely stupid, but there are alternative ways to think about these things, including mental models with variables omitted from his model which likely lead to a better appreciation of the processes involved. Money and politics, culture/religion, coalition building and the dynamics of negotiation, things like that. How do institutions fit into all of this? These things have very important effects on how science is conducted, and the (near-)exclusion of them from a model of how to conceptualize the scientific process which is at least somewhat inspired by sociology and related stuff seems more than a bit odd to me. I’m also not completely clear on why this model is even useful, what it adds. You can presumably approximate pretty much any developmental process by some punctuated equilibrium model like this – it seems to me to be a bit like doing a Taylor expansion: if you add enough terms it’ll look plausible, especially if you add ‘crises’ as well to the model to explain the cases where no clear trend is observable. Stable development is normal science, discontinuities are revolutions, high-variance areas are crises; framed that way you suddenly realize that it’s very convenient indeed for Kuhn that crises don’t always lead to revolutions and that revolutions need not be preceded by crises – if those requirements were imposed, on the other hand, the underlying data-generating process would at least be somewhat constrained by the model (though how to actually measure ‘progress’ and ‘variance’ are still questions in need of an answer). I know that the model outlined would not explain a set of completely randomly generated numbers, but in this context I think it would do quite well – even if it’s arguable whether it has actually explained anything at all. Add to the model imprecise language – 22 definitions… – and the observation that the model builder seems to be cherry-picking examples to make specific points, and what you end up with is, well…
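To make the Taylor-expansion analogy concrete, here’s a toy Python sketch (my own, assuming numpy): fit polynomials of increasing degree to pure noise and watch the apparent fit improve, even though there is by construction nothing there to explain.

import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 12)
y = rng.normal(size=12)          # pure noise - no structure to 'explain'

for degree in (1, 3, 6, 11):
    coeffs = np.polyfit(x, y, degree)
    ss_res = np.sum((y - np.polyval(coeffs, x)) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    print(f"degree {degree:2d}: R^2 = {1 - ss_res / ss_tot:.2f}")

A degree-11 polynomial through 12 points fits them perfectly, and a sufficiently elastic verbal model (‘normal science’ here, a ‘crisis’ there, a ‘revolution’ whenever a discontinuity demands one) can be stretched over the historical record in much the same way; the fit itself proves very little.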

The book was sort of interesting, but, yeah… I feel slightly tempted to revise my goodreads review after having written this post, but I’m not sure I will – it was worth reading the book and I probably should have done it a long time ago, even if only to learn what all the fuss was about (it’s my impression, which may be faulty, that this one is (‘considered to be’) one of the must-reads in this genre). Some of the hypotheses derived from the model seem perhaps to be more testable than others (‘young people are more likely to spark important development in a field’), but even in those cases things get messy (‘what do you mean by ‘important’ and who is to decide that? ‘how young?’). A problem with the model which I have not yet mentioned is incidentally that his account of how interactions between fields, and between the scientists in those fields, take place and proceed seems to me to leave a lot to be desired; the model is very ‘field-centric’. How different fields (which are not about to combine into one), and the people working in them, interact with each other may be yet another very important variable not explored in the model.

As a historical narrative about a few specific important scientific events in the past, Kuhn’s account probably isn’t bad (and it has some interesting observations related to the history of science which I did not know). As ‘a general model of how science works’, well…

June 13, 2014 Posted by | Books, Philosophy, Science | Leave a comment

The Origin and Evolution of Cultures (V)

This will be my last post about the book. Go here for a background post and my overall impression of the book – I’ll limit this post to coverage of the ‘Simple Models of Complex Phenomena’-chapter which I mentioned in that post, as well as a few observations from the introduction to part 5 of the book, which talks a little bit about what the chapter is about in general terms. The stuff they write in the chapter is in a way both an overview of the kind of approach to things which you may well end up adopting unconsciously if you’re working in a field like economics or ecology, and a defence of such an approach; as mentioned in the previous post about the book, I’ve talked about these sorts of things before, but there’s some new stuff in here as well. The chapter is written in the context of Boyd and Richerson’s coverage of their ‘Darwinian approach to evolution’, but many of the observations here are of a much more general nature and relate to the application of statistical and mathematical modelling in a much broader context; and some of those observations that do not directly relate to broader contexts still have, as far as I can see, what might be termed ‘generalized analogues’. The chapter coverage was actually interesting enough for me to seriously consider reading a book or two on these topics (books such as this one), despite the amount of work I know may well be required to deal with a book like this.

I exclude a lot of stuff from the chapter in this post, and there are a lot of other good chapters in the book. Again, you should read this book.

Here’s the stuff from the introduction:

“Chapter 19 is directed at those in the social sciences unfamiliar with a style of deploying mathematical models that is second nature to economists, evolutionary biologists, engineers, and others. Much science in many disciplines consists of a toolkit of very simple mathematical models. To many not familiar with the subtle art of the simple model, such formal exercises have two seemingly deadly flaws. First, they are not easy to follow. […] Second, motivation to follow the math is often wanting because the model is so cartoonishly simple relative to the real world being analyzed. Critics often level the charge ‘‘reductionism’’ with what they take to be devastating effect. The modeler’s reply is that these two criticisms actually point in opposite directions and sum to nothing. True, the model is quite simple relative to reality, but even so, the analysis is difficult. The real lesson is that complex phenomena like culture require a humble approach. We have to bite off tiny bits of reality to analyze and build up a more global knowledge step by patient step. […] Simple models, simple experiments, and simple observational programs are the best the human mind can do in the face of the awesome complexity of nature. The alternatives to simple models are either complex models or verbal descriptions and analysis. Complex models are sometimes useful for their predictive power, but they have the vice of being difficult or impossible to understand. The heuristic value of simple models in schooling our intuition about natural processes is exceedingly important, even when their predictive power is limited. […] Unaided verbal reasoning can be unreliable […] The lesson, we think, is that all serious students of human behavior need to know enough math to at least appreciate the contributions simple mathematical models make to the understanding of complex phenomena. The idea that social scientists need less math than biologists or other natural scientists is completely mistaken.”

And below I’ve posted the chapter coverage:

“A great deal of the progress in evolutionary biology has resulted from the deployment of relatively simple theoretical models. Staddon’s, Smith’s, and Maynard Smith’s contributions illustrate this point. Despite their success, simple models have been subjected to a steady stream of criticism. The complexity of real social and biological phenomena is compared to the toylike quality of the simple models used to analyze them and their users charged with unwarranted reductionism or plain simplemindedness.
This critique is intuitively appealing—complex phenomena would seem to require complex theories to understand them—but misleading. In this chapter we argue that the study of complex, diverse phenomena like organic evolution requires complex, multilevel theories but that such theories are best built from toolkits made up of a diverse collection of simple models. Because individual models in the toolkit are designed to provide insight into only selected aspects of the more complex whole, they are necessarily incomplete. Nevertheless, students of complex phenomena aim for a reasonably complete theory by studying many related simple models. The neo-Darwinian theory of evolution provides a good example: fitness-optimizing models, one and multiple locus genetic models, and quantitative genetic models all emphasize certain details of the evolutionary process at the expense of others. While any given model is simple, the theory as a whole is much more comprehensive than any one of them.”

“In the last few years, a number of scholars have attempted to understand the processes of cultural evolution in Darwinian terms […] The idea that unifies all this work is that social learning or cultural transmission can be modeled as a system of inheritance; to understand the macroscopic patterns of cultural change we must understand the microscopic processes that increase the frequency of some culturally transmitted variants and reduce the frequency of others. Put another way, to understand cultural evolution we must account for all of the processes by which cultural variation is transmitted and modified. This is the essence of the Darwinian approach to evolution.”

“In the face of the complexity of evolutionary processes, the appropriate strategy may seem obvious: to be useful, models must be realistic; they should incorporate all factors that scientists studying the phenomena know to be important. This reasoning is certainly plausible, and many scientists, particularly in economics […] and ecology […], have constructed such models, despite their complexity. On this view, simple models are primitive, things to be replaced as our sophistication about evolution grows. Nevertheless, theorists in such disciplines as evolutionary biology and economics stubbornly continue to use simple models even though improvements in empirical knowledge, analytical mathematics, and computing now enable them to create extremely elaborate models if they care to do so. Theorists of this persuasion eschew more detailed models because (1) they are hard to understand, (2) they are difficult to analyze, and (3) they are often no more useful for prediction than simple models. […] Detailed models usually require very large amounts of data to determine the various parameter values in the model. Such data are rarely available. Moreover, small inaccuracies or errors in the formulation of the model can produce quite erroneous predictions. The temptation is to ‘‘tune’’ the model, making small changes, perhaps well within the error of available data, so that the model produces reasonable answers. When this is done, any predictive power that the model might have is due more to statistical fitting than to the fact that it accurately represents actual causal processes. It is easy to make large sacrifices of understanding for small gains in predictive power.”

“In the face of these difficulties, the most useful strategy will usually be to build a variety of simple models that can be completely understood but that still capture the important properties of the processes of interest. Liebenstein (1976: ch. 2) calls such simple models ‘‘sample theories.’’ Students of complex and diverse subject matters develop a large body of models from which ‘‘samples’’ can be drawn for the purpose at hand. Useful sample theories result from attempts to satisfy two competing desiderata: they should be simple enough to be clearly and completely grasped, and at the same time they should reflect how real processes actually do work, at least to some approximation. A systematically constructed population of sample theories and combinations of them constitutes the theory of how the whole complex process works. […] If they are well designed, they are like good caricatures, capturing a few essential features of the problem in a recognizable but stylized manner and with no attempt to represent features not of immediate interest. […] The user attempts to discover ‘‘robust’’ results, conclusions that are at least qualitatively correct, at least for some range of situations, despite the complexity and diversity of the phenomena they attempt to describe. […] Note that simple models can often be tested for their scientific content via their predictions even when the situation is too complicated to make practical predictions. Experimental or statistical controls often make it possible to expose the variation due to the processes modeled, against the background of ‘‘noise’’ due to other ones, thus allowing a ceteris paribus prediction for purposes of empirical testing.”

“Generalized sample theories are an important subset of the simple sample theories used to understand complex, diverse problems. They are designed to capture the qualitative properties of the whole class of processes that they are used to represent, while more specialized ones are used for closer approximations to narrower classes of cases. […] One might agree with the case for a diverse toolkit of simple models but still doubt the utility of generalized sample theories. Fitness-maximizing calculations are often used as a simple caricature of how selection ought to work most of the time in most organisms to produce adaptations. Does such a generalized sample theory have any serious scientific purpose? Some might argue that their qualitative kind of understanding is, at best, useful for giving nonspecialists a simplified overview of complicated topics and that real scientific progress still occurs entirely in the construction of specialized sample theories that actually predict. A sterner critic might characterize the attempt to construct generalized models as loose speculation that actually inhibits the real work of discovering predictable relationships in particular systems. These kinds of objections implicitly assume that it is possible to do science without any kind of general model. All scientists have mental models of the world. The part of the model that deals with their disciplinary specialty is more detailed than the parts that represent related areas of science. Many aspects of a scientist’s mental model are likely to be vague and never expressed. The real choice is between an intuitive, perhaps covert, general theory and an explicit, often mathematical, one. […] To insist upon empirical science in the style of physics is to insist upon the impossible. However, to give up on empirical tests and prediction would be to abandon science and retreat to speculative philosophy. Generalized sample theories normally make only limited qualitative predictions. The logistic model of population growth is a good elementary example. At best, it is an accurate model only of microbial growth in the laboratory. However, it captures something of the biology of population growth in more complex cases. Moreover, its simplicity makes it a handy general model to incorporate into models that must also represent other processes such as selection, and intra- and interspecific competition. If some sample theory is consistently at variance with the data, then it must be modified. The accumulation of these kinds of modifications can eventually alter general theory […] A generalized model is useful so long as its predictions are qualitatively correct, roughly conforming to the majority of cases. It is helpful if the inevitable limits of the model are understood. It is not necessarily an embarrassment if more than one alternative formulation of a general theory, built from different sample models, is more or less equally correct. In this case, the comparison of theories that are empirically equivalent makes clearer what is at stake in scientific controversies and may suggest empirical and theoretical steps toward a resolution.”
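As a brief aside, since the logistic model is invoked in the quote above as the archetypal ‘generalized sample theory’: the whole model is dN/dt = rN(1 - N/K), and a toy integration of it takes a handful of Python lines (the numbers below are made up for illustration; they are not from the book).

# Logistic growth, dN/dt = r*N*(1 - N/K), integrated with a crude Euler step.
# r (growth rate), K (carrying capacity), N (initial population) and the
# step size are all made-up numbers, chosen only for illustration.
r, K, dt = 0.5, 1000.0, 0.1
N = 10.0
for _ in range(400):
    N += r * N * (1 - N / K) * dt
print(round(N))    # N approaches the carrying capacity K

Simple as it is, it captures the qualitative point made in the quote: growth is roughly exponential while the population is small and levels off as it approaches the carrying capacity.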

“The thorough study of simple models includes pressing them to their extreme limits. This is especially useful at the second step of development, where simple models of basic processes are combined into a candidate generalized model of an interesting question. There are two related purposes in this exercise. First, it is helpful to have all the implications of a given simple model exposed for comparative purposes, if nothing else. A well-understood simple sample theory serves as a useful point of comparison for the results of more complex alternatives, even when some conclusions are utterly ridiculous. Second, models do not usually just fail; they fail for particular reasons that are often very informative. Just what kinds of modifications are required to make the initially ridiculous results more nearly reasonable? […]  The exhaustive analysis of many sample models in various combinations is also the main means of seeking robust results (Wimsatt, 1981). One way to gain confidence in simple models is to build several models embodying different characterizations of the problem of interest and different simplifying assumptions. If the results of a model are robust, the same qualitative results ought to obtain for a whole family of related models in which the supposedly extraneous details differ. […] Similarly, as more complex considerations are introduced into the family of models, simple model results can be considered robust only if it seems that the qualitative conclusion holds for some reasonable range of plausible conditions.”

“A plausibility argument is a hypothetical explanation having three features in common with a traditional hypothesis: (1) a claim of deductive soundness, of in-principle logical sufficiency to explain a body of data; (2) sufficient support from the existing body of empirical data to suggest that it might actually be able to explain a body of data as well as or better than competing plausibility arguments; and (3) a program of research that might distinguish between the claims of competing plausibility arguments. The differences are that competing plausibility arguments (1) are seldom mutually exclusive, (2) can seldom be rejected by a single sharp experimental test (or small set of them), and (3) often end up being revised, limited in their generality or domain of applicability, or combined with competing arguments rather than being rejected. In other words, competing plausibility arguments are based on the claims that a different set of submodels is needed to achieve a given degree of realism and generality, that different parameter values of common submodels are required, or that a given model is correct as far as it goes, but applies with less generality, realism, or predictive power than its proponents claim. […] Human sociobiology provides a good example of a plausibility argument. The basic premise of human sociobiology is that fitness-optimizing models drawn from evolutionary biology can be used to understand human behavior. […] We think that the clearest way to address the controversial questions raised by competing plausibility arguments is to try to formulate models with parameters such that for some values of the critical parameters the results approximate one of the polar positions in such debates, while for others the model approximates the other position.”

“A well-developed plausibility argument differs sharply from another common type of argument that we call a programmatic claim. Most generally, a programmatic claim advocates a plan of research for addressing some outstanding problem without, however, attempting to construct a full plausibility argument. […] An attack on an existing, often widely accepted, plausibility argument on the grounds that the plausibility argument is incomplete is a kind of programmatic claim. Critiques of human sociobiology are commonly of this type. […] The criticism of human sociobiology has far too frequently depended on mere programmatic claims (often invalid ones at that, as when sociobiologists are said to ignore the importance of culture and to depend on genetic variation to explain human differences). These claims are generally accompanied by dubious burden-of-proof arguments. […] We have argued that theory about complex-diverse phenomena is necessarily made up of simple models that omit many details of the phenomena under study. It is very easy to criticize theory of this kind on the grounds that it is incomplete (or defend it on the grounds that it one day will be much more complete). Such criticism and defense is not really very useful because all such models are incomplete in many ways and may be flawed because of it. What is required is a plausibility argument that shows that some factor that is omitted could be sufficiently important to require inclusion in the theory of the phenomenon under consideration, or a plausible case that it really can be neglected for most purposes. […] It seems to us that until very recently, ‘‘nature-nurture’’ debates have been badly confused because plausibility arguments have often been taken to have been successfully countered by programmatic claims. It has proved relatively easy to construct reasonable and increasingly sophisticated Darwinian plausibility arguments about human behavior from the prevailing general theory. It is also relatively easy to spot the programmatic flaws in such arguments […] The problem is that programmatic objections have not been taken to imply a promise to deliver a full plausibility claim. Rather, they have been taken as a kind of declaration of independence of the social sciences from biology. Having shown that the biological theory is in principle incomplete, the conclusion is drawn that it can safely be ignored.”

“Scientists should be encouraged to take a sophisticated attitude toward empirical testing of plausibility arguments […] Folk Popperism among scientists has had the very desirable result of reducing the amount of theory-free descriptive empiricism in many complex-diverse disciplines, but it has had the undesirable effect of encouraging a search for simple mutually exclusive hypotheses that can be accepted or rejected by single experiments. By our argument, very few important problems in evolutionary biology or the social sciences can be resolved in this way. Rather, individual empirical investigations should be viewed as weighing marginally for or against plausibility arguments. Often, empirical studies may themselves discover or suggest new plausibility arguments or reconcile old ones.”

“We suspect that most evolutionary biologists and philosophers of biology on both sides of the dispute would pretty much agree with the defense of the simple models strategy presented here. To reject the strategy of building evolutionary theory from collections of simple models is to embrace a kind of scientific nihilism in which there is no hope of achieving an understanding of how evolution works. On the other hand, there is reason to treat any given model skeptically. […] It may be possible to defend the proposition that the complexity and diversity of evolutionary phenomena make any scientific understanding of evolutionary processes impossible. Or, even if we can obtain a satisfactory understanding of particular cases of evolution, any attempt at a general, unified theory may be impossible. Some critics of adaptationism seem to invoke these arguments against adaptationism without fully embracing them. The problem is that alternatives to adaptationism must face the same problem of diversity and complexity that Darwinians use the simple model strategy to finesse. The critics, when they come to construct plausibility arguments, will also have to use relatively simple models that are vulnerable to the same attack. If there is a vulgar sociobiology, there is also a vulgar criticism of sociobiology.”

June 6, 2014 Posted by | Anthropology, Biology, Books, culture, Ecology, Economics, Evolutionary biology, Mathematics, Science | Leave a comment

What Did the Romans Know? An Inquiry into Science and Worldmaking (II)

I finished the book.

I did not have a lot of nice things to say about the second half of it on goodreads. I felt it was a bad idea to blog the book right after I’d finished it (which I occasionally do), because I was actually feeling angry at the author at that point, and I hope that, having now distanced myself a bit from it, I’m better able to evaluate the book.

The author is a classics professor writing about science. I must say that at this point I have now had some bad experiences with reading authors with backgrounds in the humanities writing about science and scientific history – reading this book at one point reminded me of the experience I had reading the Engelhardt & Jensen book. It also reminded me of this comic – I briefly had a ‘hmmmmm…. – Is the reason why I have a hard time following some of this stuff the simple one that the author is a fool who doesn’t know what he’s talking about?‘-experience. It’s probably not fair to judge the book as harshly as I did in my goodreads review (or to link to that comic), and this guy is a hell of a lot smarter than Engelhardt and Jensen are (which should not surprise you – classicists are smart), but I frankly felt during the second half of this work that the author was wasting my time and I get angry when people do that. He spends inordinate amounts of time discussing trivial points which to me seem only marginally related to the topic at hand – he’d argue they’re not ‘marginally related’ of course, but I’d argue that that’s at least in part because he’s picked the wrong title for his book (see also the review to which I linked in the previous post). There’s a lot of stuff in the second half about things like historiography and ontology, discussions about the proper truth concept to apply in this setting and things like that. Somewhat technical stuff, but certainly readable. I feel he’s spending lots of words and time on trivial and irrelevant points, and there are a couple of chapters where I’ve basically engaged in extensive fisking in the margin of the book. I don’t really want to cover all that stuff here.

I’ve added some observations from the second half of the book below, as well as some critical remarks. I’ve tried in this post to limit my coverage to the reasonably good stuff in there; if you get a good impression of the book based on the material included in this post I have to caution you that I did not think the book was very good. If you want to read the book because you’re curious to know more about ‘the wisdom of the ancients’, I’ll remind you that on the topic of science at least there simply is no such thing:

“Science is special because there is no ancient wisdom. The ancients were fools, by and large. I mean no disrespect, but if you wish to design a rifle by Aristotelian principles, or treat an illness via the Galenic system, you are a fool, following foolishness.”

Lehoux would, I am sure, disagree somewhat with that assessment (that the ancients were fools), in that he argues throughout the book that the ancients could often reasonably be said to have been justified in believing many of the things that they did. I’m not sure to what extent I agree with that assessment, but the argument he makes is not without some merit.

“That magnets attract because of sympathy had long been, and would long continue to be, the standard explanation for their efficacy. That they can be impeded by garlic is brought in to complete the pairing of forces, since strongly sympathetic things are generally also strongly antipathetic with respect to other objects. […] in both Plutarch and Ptolemy, garlic-magnets are being invoked as a familiar example to fill out the range of the powers of the two forces. Sympathy and antipathy, the author is saying, are common — just look at all the examples […] goat’s blood as an active substance is another trope of the sympathy-antipathy argument. […] washing the magnet in goat’s blood, a substance antipathetic to the kind of thing that robs magnets of their power, negates the original antipathetic power of the garlic, and so restores the magnets.[15] […] we should remember that — even for the eccentric empiricist — the test only becomes necessary under the artificial conditions I have created in this chapter.[36] We know the falsity of garlic-magnets so immediately that no test [feels necessary] […] We know exactly where the disproof lies — in experience — and we know that so powerfully as to simply leave it at that. The proof that it is false is empirical. It may be a strange kind of empirical argument that never needs to come to the lab, but it is still empirical for all that. On careful analysis we can argue that this empiricism is indirect […] Our experiences of magnets, and our experiences of garlic, are quietly but very firmly mediated by our understanding of magnets and our understanding of garlic, just as Plutarch’s experiences of those things were mediated by his own understandings. But this is exactly where we hit the big epistemological snag: our argument against the garlic-magnet antipathy is no stronger, and more importantly no more or less empirical, than Plutarch’s argument for it. […]

None of the experience claims in this chapter are disingenuous. Neither we nor Plutarch are avoiding a crucial test out of fear, credulity, or duplicity. We simply don’t need to get our hands dirty. This is in part because the idea of the test becomes problematized only when we realize that there are conflicting claims resting on identical evidential bases — only then does a crucial test even suggest itself. Otherwise, we simply have an epistemological blind spot. At the same time, we recognize (as Plutarch did) how useful and reliable our classification systems are, and so even as the challenge is raised, we remain pretty confident, deep down, about what would happen to the magnet in our kitchen. The generalized appeal to experience has a lot of force, and it still has the power to trick us into thinking that the so-called “empirically obvious” is more properly empirical than it is just obvious. […]

An important part of the point of this chapter is methodological. I have taken as my starting point a question put best by Bas van Fraassen: “Is there any rational way I could come to entertain, seriously, the belief that things are some way that I now classify as absurd?”[45] I have then tried to frame a way of understanding how we can deal with the many apparently — or even transparently — ridiculous claims of premodern science, and it is this: We should take them seriously at face value (within their own contexts). Indeed, they have the exact same epistemological foundations as many of our own beliefs about how the world works (within our own context).”

“On the ancient understanding, astrology covers a lot more ground than a modern newspaper horoscope does. It can account for everything from an individual’s personality quirks and dispositions to large-scale political and social events, to racial characteristics, crop yields, plagues, storms, and earthquakes. Its predictive and explanatory ranges include some of what is covered by the modern disciplines of psychology, economics, sociology, medicine, meteorology, biology, epidemiology, seismology, and more. […] Ancient astrology […] aspires to be […] personal, precise, and specific. It often claims that it can tell someone exactly what they are going to do, when they are going to do it, and why. It is a very powerful tool indeed. So powerful, in fact, that astrology may not leave people much room to make what they would see as their own decisions. On a strong reading of the power of the stars over human affairs, it may be the case that individuals do not have what could be considered to be free will. Accordingly, a strict determinism seems to have been associated quite commonly with astrology in antiquity.”

“Seneca […] cites the multiplicity of astrological causes as leading to uncertainty about the future and inaccuracy of prediction.[41] Where opponents of astrology were fond of parading famous mistaken predictions, Seneca preempts that move by admitting that mistakes not only can be made, but must sometimes be made. However, these are mistakes of interpretation only, and this raises an important point: we may not have complete predictive command of all the myriad effects of the stars and their combinations, but the effects are there nonetheless. Where in Ptolemy and Pliny the effects were moderated by external (i.e., nonastrological) causes, Seneca is saying that the internal effects are all-important, but impossible to control exhaustively. […] Astrology is, in the ancient discourses, both highly rational and eminently empirical. It is surprising how much evidence there was for it, and how well it sustained itself in the face of objections […] Defenders of astrology often wielded formidable arguments that need to be taken very seriously if we are to fully understand the roles of astrology in the worlds in which it operates. The fact is that most ancient thinkers who talk about it seem to think that astrology really did work, and this for very good reasons.” [Lehoux goes into a lot of detail about this stuff, but I decided against covering it in too much detail here.]

I did not have a lot of problems with the stuff covered so far, but this point in the coverage is where I start getting annoyed at the author, so I won’t cover much more of it. Here’s an example of the kind of stuff he covers in the later chapters:

“The pessimistic induction has many minor variants in its exact wording, but all accounts are agreed on the basic argument: if you look at the history of the sciences, you find many instances of successful theories that turn out to have been completely wrong. This means that the success of our current scientific theories is no grounds for supposing that those theories are right. […]

In induction, examples are collected to prove a general point, and in this case we conclude, from the fact that wrong theories have often been successful in the past, that our own successful theories may well be wrong too.”

He talks a lot about this kind of stuff in the book. Stuff like this as well. Not much in those parts about what the Romans knew, aside from reiteration and contextualization of stuff covered earlier on. A problem he’s concerned with, and presumably one of the factors which motivated him to write the book, is how we might convince ourselves that our models of the world are better than those of the ancients, who also thought they had a pretty good idea about what was going on in the world – he argues this is very difficult. He also talks about Kuhn and stuff like that. As mentioned I don’t want to cover the stuff from the book I don’t like in too much detail here, and I added the quotes in the two paragraphs above mostly because they relate, if only marginally, to a point (a few points?) I felt compelled to include in the coverage here – partly because this stuff is important to me to underscore, and partly because the author seems to be completely oblivious to it:

Science should in my opinion be full of people making mistakes and getting things wrong. This is not a condition to be avoided, this is a desirable state of affairs.

This is because scientists should be proven wrong when they are wrong. And it is because scientists should risk being proven wrong. Looking for errors, problems, mistakes – this is part of the job description.

The fact that scientists are proven wrong is not a problem, it is a consequence of the fact that scientific discovery is taking place. When scientists find out that they’ve been wrong about something, this is good news. It means we’ve learned something we didn’t know.

This line of thinking seems from my reading of Lehoux to be unfamiliar to him – the desirability of discovering the ways we’re wrong doesn’t really seem to enter the picture. Somehow Lehoux seems to think that the fact that scientists may be proven wrong later on is an argument which should make us feel less secure about our models of the world. I think this is a very wrongheaded way to think about these things, and I’d actually if anything argue the opposite – precisely because our theories might be proven wrong we have reason to feel secure in our convictions, because theories which can be proven wrong contain more relevant information about the world (‘are better’) than theories which can’t, and because theories which might in principle be proven wrong but have not yet been proven wrong despite our best attempts should be placed pretty high up there in the hierarchy of beliefs. We should feel far less secure in our convictions if there were no risk they might be proven wrong.

Without errors being continually identified and mistakes corrected we’re not learning anything new, and science is all about learning new things about the world. Science shouldn’t be thought of as being about building some big fancy building and protecting it against attacks at all costs, walking around hoping we got everything just right and that there’ll be no problems with water in the basement. Philosophers of science and historians of science in my limited experience seem often to subscribe to a model like that, implicitly, presumably in part due to the methodological differences between philosophy and science – they often seem to want to talk about the risk of getting water in the basement. I think it’s much better to not worry too much about that and instead think about science in terms of unsophisticated cavemen walking around with big clubs or hammers, smashing them repeatedly into the walls of the buildings and observing which parts remain standing, in order to figure out which building materials manage the continual assaults best.

Lastly just to reiterate: Despite being occasionally interesting this book is not worth your time.

March 28, 2014 Posted by | Books, History, Philosophy, Science | 5 Comments

Stuff

i. PlosOne: Underestimating Calorie Content When Healthy Foods Are Present: An Averaging Effect or a Reference-Dependent Anchoring Effect?

“Previous studies have shown that estimations of the calorie content of an unhealthy main meal food tend to be lower when the food is shown alongside a healthy item (e.g. fruit or vegetables) than when shown alone. This effect has been called the negative calorie illusion and has been attributed to averaging the unhealthy (vice) and healthy (virtue) foods leading to increased perceived healthiness and reduced calorie estimates. The current study aimed to replicate and extend these findings to test the hypothesized mediating effect of ratings of healthiness of foods on calorie estimates. […] The first two studies failed to replicate the negative calorie illusion. In a final study, the use of a reference food, closely following a procedure from a previously published study, did elicit a negative calorie illusion. No evidence was found for a mediating role of healthiness estimates. […] The negative calorie illusion appears to be a function of the contrast between a food being judged and a reference, supporting the hypothesis that the negative calorie illusion arises from the use of a reference-dependent anchoring and adjustment heuristic and not from an ‘averaging’ effect, as initially proposed. This finding is consistent with existing data on sequential calorie estimates, and highlights a significant impact of the order in which foods are viewed on how foods are evaluated.” […]

The basic idea behind the ‘averaging effect’ above is that your calorie estimate depends on how ‘healthy’ you assume the dish to be; the intuition here is that if you see an apple next to an ice cream, you may think of the dish as healthier than if the apple wasn’t there, and that might lead to faulty (faultier) estimates of the actual number of calories in the dish (incidentally, such an effect should presumably be possible to detect even if people correctly infer that the former dish has more calories than the latter; what’s of interest here is the estimate error, not the actual estimate). These guys have a hard time finding a negative calorie illusion at all (they don’t in the first two studies), and in the case where they do, the mechanism is different from the one initially proposed; it seems to them that the story to be told is a story about anchoring effects. I like it when replication attempts get published, especially when they fail to replicate – such studies are important. Here are a few more remarks from the study, about ‘real-world implications’:

“Calorie estimates are a simple measure of participant’s perception of foods; however they almost certainly do not reflect actual factual knowledge about a food’s calorie content. It is not currently known whether calorie estimates are related to the expected satiety for a food, or anticipated tastiness. The data from the current studies fail to show that calorie estimates are derived directly from the healthiness ratings of foods. Other studies have shown that calorie estimates are influenced by the restaurant from which a food is purchased [12], as well as the order in which foods are presented [current study, 11], very much supporting the contextually sensitive nature of calorie estimates. And there is some evidence that erroneous calorie estimates alter portion size selection [13] and that lower calorie estimates for a main meal item have been shown to alter selection for drinks and side dishes [12].

Based on the current data, a negative calorie illusion is unlikely to be driving systematic failures in calorie estimations when incidental “healthy foods”, such as fruit and vegetables, are viewed alongside energy dense nutrition poor foods in advertisements or food labels. Foods would need to be viewed in a pre-determined sequence for systematic errors in real-world instances of calorie estimates. A couple of examples when this might occur are when food items are viewed in a meal with courses (starter, main, dessert) or when foods are seen in a specified order as they are positioned on a food menu or within the pathway around a supermarket from the entrance to the checkout tills.”

ii. A couple of reddit book lists that may be of interest: AskHistorians book list. AskAnthropology reading list. I’ve added a couple of books from those lists to my to-read list.

iii. You can read some pages from Popper’s Conjectures and Refutations here. A few quotes:

“I found that those of my friends who were admirers of Marx, Freud, and Adler, were impressed by a number of points common to these theories, and especially by their apparent explanatory power. These theories appeared to be able to explain practically everything that happened within the fields to which they referred. The study of any of them seemed to have the effect of an intellectual conversion or revelation, opening your eyes to a new truth hidden from those not yet initiated. Once your eyes were thus opened you saw confirming instances everywhere: the world was full of verifications of the theory. Whatever happened always confirmed it. Thus its truth appeared manifest; and unbelievers were clearly people who did not want to see the manifest truth; who refused to see it, either because it was against their class interest, or because of their repressions which were still “un-analysed” and crying aloud for treatment.

The most characteristic element in this situation seemed to me the incessant stream of confirmations, of observations which “verified” the theories in question; and this point was constantly emphasized by their adherents. A Marxist could not open a newspaper without finding on every page confirming evidence for his interpretation of history; not only in the news, but also in its presentation – which revealed the class bias of the paper – and especially of course in what the paper did not say. The Freudian analysts emphasized that their theories were constantly verified by their “clinical observations.” […] I could not think of any human behaviour which could not be interpreted in terms of either theory [Freud or Adler]. It was precisely this fact – that they always fitted, that they were always confirmed – which in the eyes of their admirers constituted the strongest argument in favour of these theories. It began to dawn on me that this apparent strength was in fact their weakness. […]

These considerations led me in the winter of 1919 ‐ 20 to conclusions which I may now reformulate as follows.
(1) It is easy to obtain confirmations, or verifications, for nearly every theory ‐ if we look for confirmations.
(2) Confirmations should count only if they are the result of risky predictions; that is to say, if, unenlightened by the theory in question, we should have expected an event which was incompatible with the theory ‐ an event which would have refuted the theory.
(3) Every ʺgoodʺ scientific theory is a prohibition: it forbids certain things to happen. The more a theory forbids, the better it is.
(4) A theory which is not refutable by any conceivable event is nonscientific. Irrefutability is not a virtue of theory (as people often think) but a vice.
(5) Every genuine test of a theory is an attempt to falsify it, or to refute it. Testability is falsifiability; but there are degrees of testability; some theories are more testable, more exposed to refutation, than others; they take, as it were, greater risks.
(6) Confirming evidence should not count except when it is the result of a genuine test of the theory; and this means that it can be presented as a serious but unsuccessful attempt to falsify the theory. (I now speak in such cases of “corroborating evidence.”)
(7) Some genuinely testable theories, when found to be false, are still upheld by their admirers – for example by introducing ad hoc some auxiliary assumption, or by re-interpreting the theory ad hoc in such a way that it escapes refutation. Such a procedure is always possible, but it rescues the theory from refutation only at the price of destroying, or at least lowering, its scientific status. (I later described such a rescuing operation as a “conventionalist twist” or a “conventionalist stratagem.”)

One can sum up all this by saying that the criterion of the scientific status of a theory is its falsifiability, or refutability, or testability.”

iv. A couple of physics videos:

v. The hidden threat that could prevent Polio’s global eradication.

“Global eradication of polio has been the ultimate game of Whack-a-Mole for the past decade; when it seems the virus has been beaten into submission in a final refuge, up it pops in a new region. Now, as vanquishing polio worldwide appears again within reach, another insidious threat may be in store from infection sources hidden in plain view.

Polio’s latest redoubts are “chronic excreters,” people with compromised immune systems who, having swallowed weakened polioviruses in an oral vaccine as children, generate and shed live viruses from their intestines and upper respiratory tracts for years. Healthy children react to the vaccine by developing antibodies that shut down viral replication, thus gaining immunity to infection. But chronic excreters cannot quite complete that process and instead churn out a steady supply of viruses. The oral vaccine’s weakened viruses can mutate and regain wild polio’s hallmark ability to paralyze the people it infects. After coming into wider awareness in the mid-1990s, the condition shocked researchers. […] Chronic excreters are generally only discovered when they develop polio after years of surreptitiously spreading the virus.”

Wikipedia incidentally has a featured article about Poliomyelitis here.

August 24, 2013 Posted by | Infectious disease, Medicine, Papers, Philosophy, Physics, Psychology, Random stuff, Science | Leave a comment

Mechanism and Causality in Biology and Economics

First, here’s a link. Some quotes below, some comments in the last part of the post:

“I refer to an account of causal order based on Simon’s seminal analysis as the structural account. It is structural in the sense that what matters for determining the causal order is the relationship among the parameters and the variables and among the variables themselves. The parameterization – that is, the identification of a privileged set of parameters that govern the functional relationships – is the source of the causal asymmetries that define the causal order. The idea of a privileged parameterization can be made more precise, by noting that a set of parameters is privileged when its members are, in the terminology of the econometricians, variation-free. A parameter is variation-free if, and only if, the fact that other parameters take some particular values in their ranges does not restrict the range of admissible values for that parameter.
Defining parameters as variation-free variables has a similar flavor to Hans Reichenbach’s (1956) Principle of the Common Cause: any genuine correlation among variables has a causal explanation – either one causes the other, they are mutual causes, or they have a common cause. Since we represent causal connections as obtaining only between variables simpliciter, we insist that parameters not display any mutual constraints. […] the variation-freeness of parameters is only a representational convention. Any situation in which it appears that putative parameters are mutually constraining can always be rewritten so that the constraints are moved into the functional forms that connect variables to each other.” […]

“John Anderson’s (1938, p. 128) notion of a causal field is helpful (see also Mackie 1980, p. 35; Hoover 2001, pp. 41–49). The causal field consists of background conditions that, for analytical or pragmatic reasons, we would like to set aside in order to focus on some more salient causal system. We are justified in doing so when, in fact, they do not change or when the changes are causally irrelevant. In terms of representation within the structural account, setting aside causes amounts to fixing certain parameters to constant values. The effect is not unlike Pearl’s or Woodward’s wiping out of a causal arrow, though somewhat more delicate. The replacement of a parameter by a constant amounts to absorbing that part of the causal mechanism into the functional form that connects the remaining parameters and variables.” (From chapter 3, ‘Identity, Structure, and Causal Representation in Scientific Models’. This was one of the better chapters).
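The ‘variation-free parameters’ idea is easy to see in a toy two-equation system – my own illustration, not an example from the book. The parameters a, b and c can each be set independently of the others; the variables x and y cannot, and the asymmetry in how interventions propagate is what fixes the causal order x → y:

```python
def solve(a, b, c):
    """Toy structural model: x = a; y = b*x + c (a, b, c play the role of variation-free parameters)."""
    x = a
    y = b * x + c
    return x, y

print(solve(a=1.0, b=2.0, c=0.5))   # baseline: x = 1.0, y = 2.5
print(solve(a=3.0, b=2.0, c=0.5))   # changing a moves both x and y
print(solve(a=1.0, b=2.0, c=4.0))   # changing c moves y but leaves x untouched
```

Nothing you do to c ever feeds back into x, which is the asymmetry the structural account reads as x causing y; and fixing c at a constant value, as in the causal-field move described in the quote above, simply absorbs it into the functional form for y.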

“Certain substantial idealisations need to be taken also when the RD model [replicator dynamics model, US] is interpreted biologically. A different set of substantial idealisations needs to be taken when the RD model is interpreted socially. By making these different idealisations, we adapt the model for its respective representative uses. This is standard scientific practice: most, and possibly all, model uses involve idealisations. Yet when the same formal structure is employed to construct different, more specific mechanistic models, and each of these models involves different idealisations, one has to be careful when inferring purported similarities between these different mechanisms based on the common formal structure. […] the RD equation is adapted for its respective representative tasks. In the course of each adaptation, certain features of the RD are drawn on – others are accepted as useful or at least harmless idealisations. Which features are drawn on and which are accepted as idealisations differ with each adaptation. The mechanism that each adaptation of the RD represents is substantially different from each other and does not share any or little causal structure between each other.” (From Chapter 5: ‘Models of Mechanisms: The Case of the Replicator Dynamics’).
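For readers who haven’t met it, the replicator dynamics referred to above is the standard evolutionary-game-theory equation dx_i/dt = x_i(f_i - f_avg): each strategy’s population share grows in proportion to how much its payoff exceeds the population average. A minimal numerical sketch for a two-strategy Hawk–Dove game (my own example; the chapter contains no code) might look like this:

```python
import numpy as np

# Payoff matrix for a Hawk-Dove game with V = 2, C = 4; rows/columns are [Hawk, Dove].
A = np.array([[-1.0, 2.0],
              [ 0.0, 1.0]])

x = np.array([0.1, 0.9])   # initial population shares (Hawk, Dove)
dt = 0.01

for _ in range(5_000):
    f = A @ x                        # expected payoff of each strategy
    f_bar = x @ f                    # population-average payoff
    x = x + dt * x * (f - f_bar)     # replicator dynamics: dx_i/dt = x_i * (f_i - f_bar)
    x = x / x.sum()                  # guard against numerical drift off the simplex

print(x)   # converges to the mixed equilibrium (0.5, 0.5), i.e. a Hawk share of V/C
```

The point of the quoted passage is that this same update can be read as differential reproduction (the biological interpretation) or as imitation of better-performing behaviour (the social interpretation), and the idealisations needed to justify each reading are different.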

“Before formulating [a] claim, it is necessary first to clear up some terminology. Leuridan[‘s] definition ignores three traditional distinctions that have brought much-needed clarity to the discussions of laws in the philosophy of science. First, we distinguish laws (metaphysical entities that produce or are responsible for regularities) and law statements (descriptions of laws). If one does not respect this distinction, one runs the risk (as Leuridan does) of unintentionally suggesting that sentences, equations, or models are responsible for the fact that certain stable regularities hold. In like fashion, we distinguish regularities, which are statistical patterns of dependence and independence among magnitudes, from generalizations, which describe regularities. Finally, we distinguish regularities from laws, which produce or otherwise explain the patterns of dependence and independence among magnitudes (or so one might hold). […]

Strict law statements, as Leuridan understands them, are nonvacuous, universally quantified, and exceptionless statements that are unlimited in scope, apply in all times and places, and contain only purely qualitative predicates (2010, p. 318). Noting that few law statements in any science live up to these standards, Leuridan argues that the focus on strict law statements (and presumably also on strict laws) is unhelpful for understanding science. Instead, he focuses on the concept of a pragmatic law (or p-law). Following Sandra Mitchell (1997, 2000, 2003, 2009), Leuridan understands p-law statements as descriptions of stable and strong regularities that can be used to predict, explain, and manipulate phenomena. A regularity is stable in proportion to the range of conditions under which it continues to hold and to the size of the space-time region in which it holds (2010, p. 325). A regularity is strong if it is deterministic or frequent. p-law statements need not satisfy the criteria for strict law statements.” (From chapter 7: ‘Mechanisms and Laws: Clarifying the Debate’)

“This section has illustrated two central points concerning extrapolation. First, it is not necessary that the causal relationship to be extrapolated is the same in the model as in the target. Given knowledge of the probability distributions for the model and target along with the selection diagram, it can be possible to make adjustments to account for differences. Secondly, the conditions needed for extrapolation vary with the type of claim to be extrapolated. In general, the more informative the causal claim, the more stringent the background assumptions needed to justify its transfer. This second point is very important for explaining how extrapolation can remain possible even when substantial uncertainty exists about the selection diagram. […]

I should emphasize that the point here is definitely not to insist upon the infirmity of causal inferences grounded in extrapolation and observational data. Uncertainties frequently arise in experiments too, especially those involving human subjects (for instance, due to noncompliance, i.e., the failure of some subjects in the experiment to follow the experimental protocol). Such uncertainties are inherent in any attempts to learn about causation in large complex systems wherein numerous practical and ethical concerns restrict the types of studies that are possible. Consequently, scientific inference in such situations usually must build a cumulative case from a variety of lines of evidence none of which is decisive in isolation. Although that may seem a rather obvious point, it does seem to get overlooked in some critical discussions of extrapolation. […] critiques which observe that extrapolations rarely if ever constitute definitive evidence sail wide of the mark. Building a case based on the coherence of multiple lines of imperfect evidence is the norm for social science and other sciences that study complex systems that are widely diffused across space and time. To insist otherwise is to misconstrue the nature of science and to obstruct applications of scientific knowledge to many pressing real-world problems.” (From chapter 10: ‘Mechanisms and Extrapolation in the Abortion-Crime Controversy’.)
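In the simplest cases the ‘adjustments to account for differences’ amount to no more than re-weighting stratum-specific effects by the target population’s covariate distribution; the selection-diagram machinery is far more general than this, but a toy numerical sketch (all numbers invented) conveys the idea:

```python
# Hypothetical stratum-specific treatment effects estimated in the study population...
effect_by_age = {"young": 1.0, "old": 3.0}

# ...and the differing age composition of the study and target populations.
p_age_study  = {"young": 0.7, "old": 0.3}
p_age_target = {"young": 0.3, "old": 0.7}

ate_study  = sum(effect_by_age[a] * p_age_study[a]  for a in effect_by_age)
ate_target = sum(effect_by_age[a] * p_age_target[a] for a in effect_by_age)

print(ate_study, ate_target)   # 1.6 in the study population vs. 2.4 after re-weighting
```

The extrapolation only goes through if the stratum-specific effects themselves travel; that assumption is precisely the sort of background knowledge the selection diagram is supposed to encode.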

“In 1992, Heckman published a seminal paper containing ‘most of the standard objections’ against randomised experiments in the social sciences. Heckman focused on the non-comparative evaluation of social policy programmes, where randomisation simply decided who would join them (without allocating the rest to a control group). Heckman claimed that even if randomisation allows the experimenters to reduce selection biases, it may produce a different bias. Specifically, experimental subjects might behave differently if joining the programme did not require ‘a lottery’. Randomisation can thus interfere with the decision patterns (the causes of action) presupposed in the programme under evaluation. […] Heckman’s main objection is that randomisation tends to eliminate risk-averse persons. This is only acceptable if risk aversion is an irrelevant trait for the outcome under investigation […] However, even if irrelevant, it compels experimenters to deal with bigger pools of potential participants in order to meet the desired sample size, so the exclusion of risk-averse subjects does not disrupt recruitment. But bigger pools may affect in turn the quality of the experiment, if it implies higher costs. One way or another, argues Heckman, randomisation is not neutral regarding the results of the experiment.” (…known stuff, but I figured I should quote it anyway as it’s unlikely that all readers are familiar with this problem. From chapter 11: ‘Causality, Impartiality and Evidence-Based Policy’. How to deal with the problem? Here’s what they conclude:)

“To sum up, in RFEs [Randomized Field Experiments – US], randomisation may generate a self-selection bias, which we can only avoid with a partial or total masking of the allocation procedure. We have argued that this is a viable solution only insofar as the trial participants do not have strong preferences about the trial outcome. If they do, we cannot assume that blinded randomisation will be a control for their preferences unless we test for its success. We will only be able to claim that the trial has been impartial regarding the participants’ preferences if we have a positive proof of them being ignorant of the comparative nature of the experiment. Hence, in RFEs, randomisation is not a strong warrant of impartiality per se: we need to prove in addition that it has been masked successfully.”
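Heckman’s worry is easy to reproduce in a toy Monte Carlo (all numbers invented for illustration): if risk-averse individuals, whose treatment effect happens to differ, simply decline to join a programme allocated by lottery, the experiment recovers the effect for entrants rather than the population-average effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

risk_averse = rng.random(n) < 0.4            # 40% of the population is risk-averse
effect = np.where(risk_averse, 0.5, 2.0)     # hypothetical treatment effect differs by type

enters_trial = ~risk_averse                  # risk-averse people refuse the lottery
treated = rng.random(n) < 0.5                # randomisation (only entrants end up mattering)

y0 = rng.normal(0.0, 1.0, n)                 # outcome without treatment
y1 = y0 + effect                             # outcome with treatment

ate_trial = y1[enters_trial & treated].mean() - y0[enters_trial & ~treated].mean()
ate_population = effect.mean()

print(f"trial estimate: {ate_trial:.2f}, population-average effect: {ate_population:.2f}")
# roughly 2.0 vs. 1.4: the comparison is internally valid, but the lottery changed who showed up
```

The randomisation itself is not the problem; the problem is that requiring a lottery changes who participates, which is why the authors end up demanding positive proof that the masking actually worked.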

On a general note, I found some of the stuff in this book interesting, but there was some confusing stuff in there as well. I had at least some background knowledge about quite a few of the subjects covered, but a lot of the stuff in the book is written by people with a completely different background (many of the contributors are philosophers of science), and in some chapters I had a hard time ‘translating’ a specific contributor’s prose (gibberish? It’s not a nice word to use, but I’m tempted to use it here anyway) into stuff related to the science/the real world – I was quite close to walking away from the book while reading chapters 8 and 9, dealing with natural selection and causal processes. I didn’t, but you should most certainly not pick up this book in order to figure out how natural selection ‘actually works’; if that’s your goal, read Dawkins instead. A few times I had an ‘I knew this, but that’s actually an interesting way to think about it’-experience and I generally like having those. As in all books with multiple contributors, there’s some variation in the quality of the material across chapters – and as you might infer from the comments above, I didn’t think very highly of chapters 8 and 9. But there were other chapters as well which also did not really interest me much. I did read it all though.

Overall I’m a little disappointed, but it’s not all bad. I gave it 2 stars on goodreads, and towards the end I moved significantly closer to the 3 star rating than the one star rating. I wouldn’t recommend it though; considering how much you’re likely to get out of this, it’s probably for most people simply too much work – it’s not an easy book to read.

August 19, 2013 Posted by | Books, Economics, Philosophy, Science, Statistics | Leave a comment

Stuff

i. I’ve read The Murder of Roger Ackroyd. I’ll say very little about the book here because I don’t want to spoil it in any way – but I do want to say that the book is awesome. I read it in one sitting, and I gave it 5 stars on goodreads (av.: 4.09); I think it’s safe to say it’s one of the best crime novels I’ve ever read (and I’ll remind you again that even though I haven’t read that much crime fiction, I have read some – e.g. every Sherlock Holmes story ever published and every Inspector Morse novel written by Colin Dexter). The cleverness of the plot reminded me of a few Asimov novels I read a long time ago. A short while after I’d finished the book I was in the laundry room about to start the washing machine and a big smile spread across my face, and I was actually close to laughing – because damn, the book is just so clever, so brilliant!

I highly recommend the book.

ii. I have been watching a few of the videos in the Introduction to Higher Mathematics youtube-series by Bill Shillito, here are a couple of examples:

I’m not super impressed by these videos at this point, but I figured I might as well link to them anyway. There are 19 videos in the playlist.

iii. Mind the Gap: Disparity Between Research Funding and Costs of Care for Diabetic Foot Ulcers. A brief comment from this month’s issue of Diabetes Care. The main point:

“Diabetic foot ulceration (DFU) is a serious and prevalent complication of diabetes, ultimately affecting some 25% of those living with the disease (1). DFUs have a consistently negative impact on quality of life and productivity […]  Patients with DFUs also have morbidity and mortality rates equivalent to aggressive forms of cancer (2). These ulcers remain an important risk factor for lower-extremity amputation as up to 85% of amputations are preceded by foot ulcers (6). It should therefore come as no surprise that some 33% of the $116 billion in direct costs generated by the treatment of diabetes and its complications was linked to the treatment of foot ulcers (7). Another study has suggested that 25–50% of the costs related to inpatient diabetes care may be directly related to DFUs (2). […] The cost of care of people with diabetic foot ulcers is 5.4 times higher in the year after the first ulcer episode than the cost of care of people with diabetes without foot ulcers (10). […]

We identified 22,531 NIH-funded projects in diabetes between 2002–2011. Remarkably, of these, only 33 (0.15%) were specific to DFUs. Likewise, these 22,531 NIH-funded projects yielded $7,161,363,871 in overall diabetes funding, and of this, only $11,851,468 (0.17%) was specific to DFUs. Thus, a 604-fold difference exists between overall diabetes funding and that allocated to DFUs. […] As DFUs are prevalent and have a negative impact on the quality of life of patients with diabetes, it would stand to reason that U.S. federal funding specifically for DFUs would be proportionate with this burden. Unfortunately, this yawning gap in funding (and commensurate development of a culture of sub-specialty research) stands in stark contrast to the outsized influence of DFUs on resource utilization within diabetes care. This disparity does not appear to be isolated to [the US].”

I’ve read about diabetic foot care before, but I had no idea about this stuff. Of the roughly 175,000 peer-reviewed publications about diabetes published in the period of 2000-2009, only 1200 of them – 0.69% – were about the diabetic foot. You can quibble over the cost estimates and argue that perhaps they’ve been overstated because these guys want more money, but I think that it’s highly unlikely that the uncertainties related to the cost estimates are so big as to somehow make the current (research) resource allocation scheme appear cost-efficient in a CBA with reasonable assumptions – there simply has to be some low-hanging fruit here.
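For what it’s worth, the quoted figures do hang together; a quick check of the shares and the 604-fold ratio:

```python
total_funding = 7_161_363_871        # overall NIH diabetes funding 2002-2011 (quoted figure)
dfu_funding   = 11_851_468           # of which specific to diabetic foot ulcers (quoted figure)
total_projects, dfu_projects = 22_531, 33
total_papers, dfu_papers = 175_000, 1_200    # approximate publication counts, 2000-2009

print(f"DFU share of projects: {dfu_projects / total_projects:.2%}")     # ~0.15%
print(f"DFU share of funding:  {dfu_funding / total_funding:.2%}")       # ~0.17%
print(f"funding ratio:         {total_funding / dfu_funding:.0f}-fold")  # ~604-fold
print(f"DFU share of papers:   {dfu_papers / total_papers:.2%}")         # ~0.69%
```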

A slightly related (if you stretch the definition of ‘related’ a little) article which I also found interesting here.

iv. “How quickly would the oceans drain if a circular portal 10 meters in radius leading into space was created at the bottom of Challenger Deep, the deepest spot in the ocean? How would the Earth change as the water is being drained?”

And, “Supposing you did Drain the Oceans, and dumped the water on top of the Curiosity rover, how would Mars change as the water accumulated?”

And now you know.

v. Take news of cancer ‘breakthrough’ with a big grain of salt. I’d have added the word ‘any’ and probably an ‘s’ to the word breakthrough as well if I’d authored the headline, in order to make a more general point – but be that as it may… The main thrust:

“scientific breakthroughs should not be announced at press conferences using the vocabulary of public relations professionals.

The language of science and medicine should be cautious and humble because diseases like cancer are relentless and humbling. […]

The reality is that biomedical research is a slow process that yields small incremental results. If there is a lesson to retain from the tale of CFI-400945, it’s that finding new treatments takes a lot of time and a lot of money. It is a venture worthy of support, but unworthy of exaggerated expectations and casual overstatement.

Hype only serves to create false hope.”

People who’re not familiar with how science actually works (and how related processes such as drug development work) often have weird ideas about how fast things tend to proceed and how (/un?)likely a ‘promising’ result in the lab might be to be translated into, say, a new treatment option available to the general patient population. And yeah, that set of ‘people who’re not familiar with how science works’ would include almost everybody.

It should be noted, as I’m sure Picard knows, that it’s a lot easier to get funding for your project if you’re exaggerating benefits and downplaying costs; if you’re too optimistic; if you’re saying nice things about the guy writing the checks even though you think he’s an asshole; etc. Some types of dishonesty are probably best perceived of as nothing more than ‘good salesmanship’ whereas other types might have different interpretations; but either way it’d be silly to pretend that stuff like false hope does not sell a lot of tickets (and newspapers, and diluted soap water, and…). Given that, it’s hardly likely that things will change much anytime soon – the demand for information here is much higher than is the demand for accurate information. But it’s nice to read an article like this one every now and then anyway.

vi. “There aren’t enough small numbers to meet the many demands made of them.” Short description here, link to the paper here.

July 17, 2013 Posted by | Books, Diabetes, Journalism, Lectures, Mathematics, Papers, Science | 11 Comments

Wikipedia articles of interest

i. Victorian era. This is a fascinating article, with lots of stuff:

“The Victorian era of British history was the period of Queen Victoria‘s reign from 20 June 1837 until her death on 22 January 1901. It was a long period of peace, prosperity, refined sensibilities and national self-confidence for Britain.[citation needed] Some scholars date the beginning of the period in terms of sensibilities and political concerns to the passage of the Reform Act 1832.

The era was preceded by the Georgian period and followed by the Edwardian period. The latter half of the Victorian age roughly coincided with the first portion of the Belle Époque era of continental Europe and the Gilded Age of the United States.

Culturally there was a transition away from the rationalism of the Georgian period and toward romanticism and mysticism with regard to religion, social values, and the arts.[1] In international relations the era was a long period of peace, known as the Pax Britannica, and economic, colonial, and industrial consolidation, temporarily disrupted by the Crimean War in 1854. The end of the period saw the Boer War. Domestically, the agenda was increasingly liberal with a number of shifts in the direction of gradual political reform, industrial reform and the widening of the voting franchise. […]

The population of England almost doubled from 16.8 million in 1851 to 30.5 million in 1901.[2] Scotland’s population also rose rapidly, from 2.8 million in 1851 to 4.4 million in 1901. Ireland’s population decreased rapidly, from 8.2 million in 1841 to less than 4.5 million in 1901, mostly due to the Great Famine.[3] At the same time, around 15 million emigrants left the United Kingdom in the Victorian era and settled mostly in the United States, Canada, and Australia.[4] […]

The mortality rates in England changed greatly through the 19th century. There was no catastrophic epidemic or famine in England or Scotland in the 19th century – it was the first century in which a major epidemic did not occur throughout the whole country, with deaths per 1000 of population per year in England and Wales dropping from 21.9 from 1848–54 to 17 in 1901 (contrasting with, for instance, 5.4 in 1971).[5] […]

The Victorian era became notorious for the employment of young children in factories and mines and as chimney sweeps.[27] Child labour, often brought about by economic hardship, played an important role in the Industrial Revolution from its outset: Charles Dickens, for example, worked at the age of 12 in a blacking factory, with his family in a debtors’ prison. In 1840 only about 20 percent of the children in London had any schooling. By 1860 about half of the children between 5 and 15 were in school (including Sunday school).[28]

The children of the poor were expected to help towards the family budget, often working long hours in dangerous jobs for low wages.[25] Agile boys were employed by the chimney sweeps; small children were employed to scramble under machinery to retrieve cotton bobbins; and children were also employed to work in coal mines, crawling through tunnels too narrow and low for adults. Children also worked as errand boys, crossing sweepers, shoe blacks, or sold matches, flowers, and other cheap goods.[25] Some children undertook work as apprentices to respectable trades, such as building, or as domestic servants (there were over 120,000 domestic servants in London in the mid 18th century). Working hours were long: builders might work 64 hours a week in summer and 52 in winter, while domestic servants worked 80 hour weeks. Many young people worked as prostitutes (the majority of prostitutes in London were between 15 and 22 years of age).[28] […]

Children as young as four were put to work. In coal mines children began work at the age of 5 and generally died before the age of 25. Many children (and adults) worked 16 hour days. As early as 1802 and 1819, Factory Acts were passed to limit the working hours of workhouse children in factories and cotton mills to 12 hours per day. These acts were largely ineffective […]

Beginning in the late 1840s, major news organisations, clergymen, and single women became increasingly concerned about prostitution, which came to be known as “The Great Social Evil”. Estimates of the number of prostitutes in London in the 1850s vary widely (in his landmark study, Prostitution, William Acton reported that the police estimated there were 8,600 in London alone in 1857). When the United Kingdom Census 1851 publicly revealed a 4% demographic imbalance in favour of women (i.e., 4% more women than men), the problem of prostitution began to shift from a moral/religious cause to a socio-economic one. The 1851 census showed that the population of Great Britain was roughly 18 million; this meant that roughly 750,000 women would remain unmarried simply because there were not enough men. These women came to be referred to as “superfluous women” or “redundant women”, and many essays were published discussing what, precisely, ought to be done with them.[29] […] Divorce legislation introduced in 1857 allowed for a man to divorce his wife for adultery, but a woman could only divorce if adultery were accompanied by cruelty. The anonymity of the city led to a large increase in prostitution and unsanctioned sexual relationships.”

An image from the article, displaying “working class life in Victorian Wetherby, West Yorkshire”:


ii. Landlocked country.

“A landlocked country is a country entirely enclosed by land, or whose only coastlines lie on closed seas.[1][2][3][4] There are 48 landlocked countries in the world, including partially recognized states. No landlocked countries are found on North American, Australian, and inhospitable Antarctic continents. The general economic and other disadvantages experienced by landlocked countries makes the majority of these countries Landlocked Developing Countries (LLDCs).[5] Nine of the twelve countries with the lowest HDI scores are landlocked.[6] […] Historically, being landlocked was regarded as a disadvantageous position. It cuts the country off from sea resources such as fishing, but more importantly cuts off direct access to seaborne trade which makes up a large percentage of international trade. Coastal regions tended to be wealthier and more heavily populated than inland ones. […] Landlocked developing countries have significantly higher costs of international cargo transportation compared to coastal developing countries (in Asia the ratio is 3:1).[10]

Landlocked countries make up 11.4% of the total land area of Earth and account for an estimated 6.9% of the world population.

“A landlocked country surrounded only by other landlocked countries may be called a “doubly landlocked” country. A person in such a country has to cross at least two borders to reach a coastline.

There are currently two such countries in the world:

  • Liechtenstein, surrounded by Austria and Switzerland
  • Uzbekistan, surrounded by Afghanistan, Kazakhstan, Kyrgyzstan, Tajikistan, and Turkmenistan

There were no doubly landlocked countries in the world from the Unification of Germany in 1871 until the end of World War I.”

iii. 1842 retreat from Kabul/Massacre of Elphinstone’s Army.

“The 1842 Kabul Retreat (or Massacre of Elphinstone’s Army) was the entire loss of a combined force of British and Indian troops from the British East India Company and the deaths of thousands of civilians in Afghanistan between 6-13 January 1842. The massacre, which happened during the First Anglo-Afghan War, occurred when Major General Sir William Elphinstone attempted to lead a military and civilian column of Europeans and Indians from Kabul back to the British garrison at Jalalabad more than 90 miles (140 km) away. They were forced to leave because of an uprising led by Akbar Khan, the son of the deposed Afghan leader, Dost Mohammad Khan.

Afghan tribes launched numerous attacks against the column as it made slow progress through the winter snows of the Hindu Kush. In total the India Company army lost 4,500 troops, along with 12,000 civilian workers, family members and other camp-followers. The final stand was made just outside a village called Gandamak on 13 January.[2]

Out of more than 16,000 people from the column commanded by Elphinstone, only one European, an Assistant Surgeon named William Brydon, and a few sepoys would eventually reach Jalalabad. The Afghanis subsequently released a number of British prisoners and civilian hostages. However many Indians were not handed back and were instead sold into slavery or killed.

The retreat has been described as “the worst British military disaster until the fall of Singapore exactly a century later.[3] […]

Sir Willoughby Cotton was replaced as commander of the remaining British troops by the ageing and infirm Sir William Elphinstone. The 59-year-old Major General, who was initially unwilling to accept the appointment, had entered the British army in 1804. He was made a Companion of the Bath for leading the 33rd Regiment of Foot at the Battle of Waterloo. By 1825 he had been promoted to colonel and made a major-general in 1837. Although Elphinstone was a man of high birth and perfect manners, his colleague and contemporary General William Nott regarded him as “the most incompetent soldier who ever became general”. […]

Throughout the third day, the column laboured through the pass. Once the main body had moved through, the Afghans left their positions to massacre the stragglers and the wounded. By the evening of 9 January, the column had only moved 25 miles (40 km) but already 3,000 people had died. Most had been killed in the fighting, but some had frozen to death or even taken their own lives.

By the fourth day, a few hundred soldiers deserted and tried to return to Kabul but they were all killed. By now Elphinstone, who had ceased giving orders, sat silently on his horse. On the evening of 11 January, Lady Sale, along with the wives and children of both British and Indian officers, and their retinues, accepted Akbar Khan’s assurances of protection. Despite deep mistrust, the group was taken into the custody of Akbar’s men. However once they were hostages, all the Indian servants and sepoy wives were murdered. Akbar Khan’s envoys then returned and persuaded Elphinstone and his second in command, Brigadier Shelton, to become hostages, too. Both senior officers agreed to surrender, abandoning their men to their fate. Elphinstone died on 23 April as a captive. […] On 13 January, a British officer from the 16,000 strong column rode into Jalalabad on a wounded horse (a few sepoys, who had hidden in the mountains, followed in the coming weeks). The sole survivor of the 12-man cavalry group, assistant Surgeon William Brydon, was asked upon arrival what happened to the army, to which he answered “I am the army”. Although part of his skull had been sheared off by a sword, he ultimately survived because he had insulated his hat with a magazine which deflected the blow. […]

The annihilation of about 16,500 people left Britain and India in shock and the Governor General, Lord Auckland, suffered a stroke upon hearing the news. In the Autumn of 1842 an “Army of Retribution” led by Sir George Pollock, with William Nott and Robert Sale commanding divisions, levelled Kabul. Sale personally rescued his wife Lady Sale and some other hostages from the hands of Akbar Khan. However, the slaughter of an army by Afghan tribesmen was humiliating for the British authorities in India.

Of the British prisoners, 32 officers, over 50 soldiers, 21 children and 12 women survived to be released in September 1842. An unknown number of sepoys and other Indian prisoners were sold into slavery in Kabul or kept as captives in mountain villages.[9] […]

The leadership of Elphinstone is seen as a notorious example of how the ineptitude and indecisiveness of a senior officer could compromise the morale and effectiveness of a whole army (though already much depleted). Elphinstone completely failed to lead his soldiers, but fatally exerted enough authority to prevent any of his officers from exercising proper command in his place.”

iv. Alcatraz Federal Penitentiary. (‘good article’)


“The Alcatraz Federal Penitentiary or United States Penitentiary, Alcatraz Island (often just referred to as Alcatraz) was a maximum high-security Federal prison on Alcatraz Island, 1.25 miles (2.01 km) off the coast of San Francisco, California, USA, which operated from 1934 to 1963. […]

Alcatraz was designed to hold prisoners who continuously caused trouble at other federal prisons. One of the world’s most notorious, and best known prisons over the years, Alcatraz housed some 1576 of America’s most ruthless criminals […] Faced with high running maintenance costs and a poor reputation, Alcatraz closed on March 21, 1963. […]

The prison cells typically measured 9 feet (2.7 m) by 5 feet (1.5 m) and 7 feet (2.1 m) high. The cells were primitive and lacked privacy, with a bed, a desk and a washbasin and toilet on the back wall and few furnishings except a blanket. Black people were segregated from the rest in cell designation due to racial abuse being prevalent. […]

By the 1950s, the prison conditions had improved and prisoners were gradually permitted more privileges such as the playing of musical instruments, watching movies at weekends, painting, and radio use; the strict code of silence became more relaxed and prisoners were permitted to talk quietly.[17] However, the prison continued to be unpopular on the mainland into the 1950s; it was by far the most expensive prison institution in the United States and continued to be perceived by many as America’s most extreme jail.[19][10] […] A 1959 report indicated that Alcatraz was more than three times more expensive to run than the average US prison; $10 per prisoner per day compared to $3 in most others prisons.[20] The problem of Alcatraz was exacerbated by the fact that the prison had seriously deteriorated structurally in exposure to the salt air and wind and would need $5 million to deal with it. Major repairs began in 1958 but by 1961 the prison was evaluated by engineers to be a lost cause and Robert F. Kennedy submitted plans for a new maximum-security institution at Marion, Illinois.[10] After the escape from Alcatraz in June 1962, the prison was the subject of heated investigations, and with the major structural problems and ongoing expense, the prison finally closed on 21 March 1963.[20] […] Today the penitentiary is a museum and one of San Francisco’s major tourist attractions, attracting some 1.5 million visitors annually.[21][22] […]

Security in the prison was very tight, with the constant checking of bars, doors, locks, electrical fixtures etc., to ensure that security hadn’t been broken.[40] During a standard day the prisoners would be counted 13 times, and the ratio of prisoners to guards was the lowest of any American prison of the time.[41][42] […]

The library, which utilized a closed-stack paging system, had a collection of 10,000 to 15,000 books […] The average prisoner read 75 to 100 books a year.[66]

v. Pathological science.

Pathological science is the process by which “people are tricked into false results … by subjective effects, wishful thinking or threshold interactions”.[1][2] The term was first used by Irving Langmuir, Nobel Prize-winning chemist, during a 1953 colloquium at the Knolls Research Laboratory. Langmuir said a pathological science is an area of research that simply will not “go away”—long after it was given up on as ‘false’ by the majority of scientists in the field. He called pathological science “the science of things that aren’t so”.[3]

Bart Simon lists it among practices pretending to be science: “categories [.. such as ..] pseudoscience, amateur science, deviant or fraudulent science, bad science, junk science, and popular science [..] pathological science, cargo-cult science, and voodoo science ..”.[4] Examples of pathological science may include homeopathy, Martian canals, N-rays, polywater, water memory, perpetual motion, and cold fusion. The theories and conclusions behind all of these examples are currently rejected or disregarded by the majority of scientists. […]

Pathological science, as defined by Langmuir, is a psychological process in which a scientist, originally conforming to the scientific method, unconsciously veers from that method, and begins a pathological process of wishful data interpretation (see the Observer-expectancy effect, and cognitive bias). Some characteristics of pathological science are:

  • The maximum effect that is observed is produced by a causative agent of barely detectable intensity, and the magnitude of the effect is substantially independent of the intensity of the cause.
  • The effect is of a magnitude that remains close to the limit of detectability, or many measurements are necessary because of the very low statistical significance of the results.
  • There are claims of great accuracy.
  • Fantastic theories contrary to experience are suggested.
  • Criticisms are met by ad hoc excuses.
  • The ratio of supporters to critics rises and then falls gradually to oblivion.

Langmuir never intended the term to be rigorously defined; it was simply the title of his talk on some examples of “weird science”.”

vi. Upwelling.

Upwelling is an oceanographic phenomenon that involves wind-driven motion of dense, cooler, and usually nutrient-rich water towards the ocean surface, replacing the warmer, usually nutrient-depleted surface water. The nutrient-rich upwelled water stimulates the growth and reproduction of primary producers such as phytoplankton. Due to the biomass of phytoplankton and presence of cool water in these regions, upwelling zones can be identified by cool sea surface temperatures (SST) and high concentrations of chlorophyll-a.[1][2]

The increased availability of nutrients in upwelling regions results in high levels of primary productivity and thus fishery production. Approximately 25% of the total global marine fish catches come from five upwellings that occupy only 5% of the total ocean area.[3]

(“Areas of coastal upwelling in red.”)

April 7, 2013 Posted by | Geography, Geology, History, Psychology, Science, Wikipedia | Leave a comment

A summary of scientific method

By Peter Kosso.

This book is crap; stay away from it. It’s very short, which was the only reason why I actually read it cover to cover. Kosso neglects some very important points you’d want to see in a publication like this; on the list of recommended reading he includes Kuhn but not Popper, and Popper’s name isn’t even mentioned – presumably because he disagrees with Popper about the importance of falsification. Conceptually, he neither discusses nor seems to understand how crucial it is in science that hypotheses restrict the (potential) outcome space. He picks out history and archaeology as examples of ‘social sciences’; maybe because that’s the closest he’s ever been to the social sciences? He talks about how experimental designs can play a role here, but doesn’t include a single word about the role of statistics in scientific disciplines.

I’d probably give it 1 out of 5 on amazon. He reads as if he doesn’t have a clue. The only good thing about the book is that it is quite short.

 

March 24, 2013 Posted by | Books, Philosophy, Science | 2 Comments

Richard Feynman: Fun to Imagine | Using physics to explain how the world works

This is great stuff:

February 24, 2013 Posted by | Physics, Science | Leave a comment

Scientific knowledge across countries

I’m sure I’ve seen some of this stuff before (Razib Khan may have covered it), but I’m pretty sure I have not blogged it. Link to the source here, click to view the figures/tables in full size:

Of course formal education matters, a lot:

Interestingly, the link also has data related to a recent post:

It seems that public opinion doesn’t change very much over time. I thought this last one was interesting (if anyone knows of any related Danish data, let me know in the comment section):

Note that this is only the “very great prestige”-proportion, so there may be stuff going on we don’t know about. Note how much both ‘teacher’ and ‘military officer’ have changed over time. Something funny may be going on here; ‘farmer’ is more prestigious than ‘Member of Congress’ and ‘Lawyer’ (‘well of course it is,’ you might say, but…).

July 8, 2012 Posted by | Agnotology, Data, Science | Leave a comment