Econstudentlog

The pleasure of finding things out (II)

Here’s my first post about the book. In this post I have included a few more quotes from the last half of the book.

“Are physical theories going to keep getting more abstract and mathematical? Could there be today a theorist like Faraday in the early nineteenth century, not mathematically sophisticated but with a very powerful intuition about physics?
Feynman: I’d say the odds are strongly against it. For one thing, you need the math just to understand what’s been done so far. Beyond that, the behavior of subnuclear systems is so strange compared to the ones the brain evolved to deal with that the analysis has to be very abstract: To understand ice, you have to understand things that are themselves very unlike ice. Faraday’s models were mechanical – springs and wires and tense bands in space – and his images were from basic geometry. I think we’ve understood all we can from that point of view; what we’ve found in this century is different enough, obscure enough, that further progress will require a lot of math.”

“There’s a tendency to pomposity in all this, to make it all deep and profound. My son is taking a course in philosophy, and last night we were looking at something by Spinoza – and there was the most childish reasoning! There were all these Attributes, and Substances, all this meaningless chewing around, and we started to laugh. Now, how could we do that? Here’s this great Dutch philosopher, and we’re laughing at him. It’s because there was no excuse for it! In that same period there was Newton, there was Harvey studying the circulation of the blood, there were people with methods of analysis by which progress was being made! You can take every one of Spinoza’s propositions, and take the contrary propositions, and look at the world – and you can’t tell which is right. Sure, people were awed because he had the courage to take on these great questions, but it doesn’t do any good to have the courage if you can’t get anywhere with the question. […] It isn’t the philosophy that gets me, it’s the pomposity. If they’d just laugh at themselves! If they’d just say, “I think it’s like this, but von Leipzig thought it was like that, and he had a good shot at it, too.” If they’d explain that this is their best guess … But so few of them do”.

“The lesson you learn as you grow older in physics is that what we can do is a very small fraction of what there is. Our theories are really very limited.”

“The first principle is that you must not fool yourself – and you are the easiest person to fool. So you have to be very careful about that. After you’ve not fooled yourself, it’s easy not to fool other scientists. You just have to be honest in a conventional way after that.”

“When I was an undergraduate I worked with Professor Wheeler* as a research assistant, and we had worked out together a new theory about how light worked, how the interaction between atoms in different places worked; and it was at that time an apparently interesting theory. So Professor Wigner†, who was in charge of the seminars there [at Princeton], suggested that we give a seminar on it, and Professor Wheeler said that since I was a young man and hadn’t given seminars before, it would be a good opportunity to learn how to do it. So this was the first technical talk that I ever gave. I started to prepare the thing. Then Wigner came to me and said that he thought the work was important enough that he’d made special invitations to the seminar to Professor Pauli, who was a great professor of physics visiting from Zurich; to Professor von Neumann, the world’s greatest mathematician; to Henry Norris Russell, the famous astronomer; and to Albert Einstein, who was living near there. I must have turned absolutely white or something because he said to me, “Now don’t get nervous about it, don’t be worried about it. First of all, if Professor Russell falls asleep, don’t feel bad, because he always falls asleep at lectures. When Professor Pauli nods as you go along, don’t feel good, because he always nods, he has palsy,” and so on. That kind of calmed me down a bit”.

“Well, for the problem of understanding the hadrons and the muons and so on, I can see at the present time no practical applications at all, or virtually none. In the past many people have said that they could see no applications and then later they found applications. Many people would promise under those circumstances that something’s bound to be useful. However, to be honest – I mean he looks foolish; saying there will never be anything useful is obviously a foolish thing to do. So I’m going to be foolish and say these damn things will never have any application, as far as I can tell. I’m too dumb to see it. All right? So why do you do it? Applications aren’t the only thing in the world. It’s interesting in understanding what the world is made of. It’s the same interest, the curiosity of man that makes him build telescopes. What is the use of discovering the age of the universe? Or what are these quasars that are exploding at long distances? I mean what’s the use of all that astronomy? There isn’t any. Nonetheless, it’s interesting. So it’s the same kind of exploration of our world that I’m following and it’s curiosity that I’m satisfying. If human curiosity represents a need, the attempt to satisfy curiosity, then this is practical in the sense that it is that. That’s the way I would look at it at the present time. I would not put out any promise that it would be practical in some economic sense.”

“To science we also bring, besides the experiment, a tremendous amount of human intellectual attempt at generalization. So it’s not merely a collection of all those things which just happen to be true in experiments. It’s not just a collection of facts […] all the principles must be as wide as possible, must be as general as possible, and still be in complete accord with experiment, that’s the challenge. […] Every one of the concepts of science is on a scale graduated somewhere between, but at neither end of, absolute falsity or absolute truth. It is necessary, I believe, to accept this idea, not only for science, but also for other things; it is of great value to acknowledge ignorance. It is a fact that when we make decisions in our life, we don’t necessarily know that we are making them correctly; we only think that we are doing the best we can – and that is what we should do.”

“In this age of specialization, men who thoroughly know one field are often incompetent to discuss another.”

“I believe that moral questions are outside of the scientific realm. […] The typical human problem, and one whose answer religion aims to supply, is always of the following form: Should I do this? Should we do this? […] To answer this question we can resolve it into two parts: First – If I do this, what will happen? – and second – Do I want that to happen? What would come of it of value – of good? Now a question of the form: If I do this, what will happen? is strictly scientific. […] The technique of it, fundamentally, is: Try it and see. Then you put together a large amount of information from such experiences. All scientists will agree that a question – any question, philosophical or other – which cannot be put into the form that can be tested by experiment (or, in simple terms, that cannot be put into the form: If I do this, what will happen?) is not a scientific question; it is outside the realm of science.”


June 26, 2019 Posted by | Astronomy, Mathematics, Philosophy, Physics, Quotes/aphorisms, Science

The pleasure of finding things out (I?)

As I put it in my goodreads review of the book, “I felt in good company while reading this book”. Some of the ideas in the book are by now well known; for example, some of the interview snippets included in the book have also been uploaded to YouTube, where they have been viewed by hundreds of thousands of people (I added a couple of them to my ‘about’ page some years ago, and they’re still there; they are enjoyable videos to watch and they have aged well!) (the overlap between the book’s text and the available sound recordings is not 100 % for this material, but it’s close enough that I assume they come from the same interviews). Other ideas and pieces I would assume to be less well known, for example Feynman’s account of visiting Uri Geller in his hotel room to investigate Geller’s supposed mind-reading and key-bending abilities.

I have added some sample quotes from the book below. It’s a good book, recommended.

“My interest in science is to simply find out about the world, and the more I find out the better it is, like, to find out. […] You see, one thing is, I can live with doubt and uncertainty and not knowing. I think it’s much more interesting to live not knowing than to have answers which might be wrong. I have approximate answers and possible beliefs and different degrees of certainty about different things, but I’m not absolutely sure of anything and there are many things I don’t know anything about […] I don’t have to know an answer, I don’t feel frightened by not knowing things, by being lost in a mysterious universe without having any purpose, which is the way it really is so far as I can tell. It doesn’t frighten me.”

“Some people look at the activity of the brain in action and see that in many respects it surpasses the computer of today, and in many other respects the computer surpasses ourselves. This inspires people to design machines that can do more. What often happens is that an engineer has an idea of how the brain works (in his opinion) and then designs a machine that behaves that way. This new machine may in fact work very well. But, I must warn you that that does not tell us anything about how the brain actually works, nor is it necessary to ever really know that, in order to make a computer very capable. It is not necessary to understand the way birds flap their wings and how the feathers are designed in order to make a flying machine. It is not necessary to understand the lever system in the legs of a cheetah – an animal that runs fast – in order to make an automobile with wheels that goes very fast. It is therefore not necessary to imitate the behavior of Nature in detail in order to engineer a device which can in many respects surpass Nature’s abilities.”

“These ideas and techniques [of scientific investigation] , of course, you all know. I’ll just review them […] The first is the matter of judging evidence – well, the first thing really is, before you begin you must not know the answer. So you begin by being uncertain as to what the answer is. This is very, very important […] The question of doubt and uncertainty is what is necessary to begin; for if you already know the answer there is no need to gather any evidence about it. […] We absolutely must leave room for doubt or there is no progress and there is no learning. There is no learning without having to pose a question. And a question requires doubt. […] Authority may be a hint as to what the truth is, but it is not the source of information. As long as it’s possible, we should disregard authority whenever the observations disagree with it. […] Science is the belief in the ignorance of experts.”

“If we look away from the science and look at the world around us, we find out something rather pitiful: that the environment that we live in is so actively, intensely unscientific. Galileo could say: “I noticed that Jupiter was a ball with moons and not a god in the sky. Tell me, what happened to the astrologers?” Well, they print their results in the newspapers, in the United States at least, in every daily paper every day. Why do we still have astrologers? […] There is always some crazy stuff. There is an infinite amount of crazy stuff, […] the environment is actively, intensely unscientific. There is talk about telepathy still, although it’s dying out. There is faith-healing galore, all over. There is a whole religion of faith-healing. There’s a miracle at Lourdes where healing goes on. Now, it might be true that astrology is right. It might be true that if you go to the dentist on the day that Mars is at right angles to Venus, that it is better than if you go on a different day. It might be true that you can be cured by the miracle of Lourdes. But if it is true it ought to be investigated. Why? To improve it. If it is true then maybe we can find out if the stars do influence life; that we could make the system more powerful by investigating statistically, scientifically judging the evidence objectively, more carefully. If the healing process works at Lourdes, the question is how far from the site of the miracle can the person, who is ill, stand? Have they in fact made a mistake and the back row is really not working? Or is it working so well that there is plenty of room for more people to be arranged near the place of the miracle? Or is it possible, as it is with the saints which have recently been created in the United States–there is a saint who cured leukemia apparently indirectly – that ribbons that are touched to the sheet of the sick person (the ribbon having previously touched some relic of the saint) increase the cure of leukemia–the question is, is it gradually being diluted? You may laugh, but if you believe in the truth of the healing, then you are responsible to investigate it, to improve its efficiency and to make it satisfactory instead of cheating. For example, it may turn out that after a hundred touches it doesn’t work anymore. Now it’s also possible that the results of this investigation have other consequences, namely, that nothing is there.”

“I believe that a scientist looking at nonscientific problems is just as dumb as the next guy – and when he talks about a nonscientific matter, he will sound as naive as anyone untrained in the matter.”

“If we want to solve a problem that we have never solved before, we must leave the door to the unknown ajar.”

“For a successful technology, reality must take precedence over public relations, for nature cannot be fooled.”

“I would like to say a word or two […] about words and definitions, because it is necessary to learn the words. It is not science. That doesn’t mean just because it is not science that we don’t have to teach the words. We are not talking about what to teach; we are talking about what science is. It is not science to know how to change centigrade to Fahrenheit. It’s necessary, but it is not exactly science. […] I finally figured out a way to test whether you have taught an idea or you have only taught a definition. Test it this way: You say, “Without using the new word which you have just learned, try to rephrase what you have just learned in your own language.”

“My father dealt a little bit with energy and used the term after I got a little bit of the idea about it. […] He would say, “It [a toy dog] moves because the sun is shining,” […]. I would say “No. What has that to do with the sun shining? It moved because I wound up the springs.” “And why, my friend, are you able to move to wind up this spring?” “I eat.” “What, my friend, do you eat?” “I eat plants.” “And how do they grow?” “They grow because the sun is shining.” […] The only objection in this particular case was that this was the first lesson. It must certainly come later, telling you what energy is, but not to such a simple question as “What makes a [toy] dog move?” A child should be given a child’s answer. “Open it up; let’s look at it.””

“Now the point of this is that the result of observation, even if I were unable to come to the ultimate conclusion, was a wonderful piece of gold, with a marvelous result. It was something marvelous. Suppose I were told to observe, to make a list, to write down, to do this, to look, and when I wrote my list down, it was filed with 130 other lists in the back of a notebook. I would learn that the result of observation is relatively dull, that nothing much comes of it. I think it is very important – at least it was to me – that if you are going to teach people to make observations, you should show that something wonderful can come from them. […] [During my life] every once in a while there was the gold of a new understanding that I had learned to expect when I was a kid, the result of observation. For I did not learn that observation was not worthwhile. […] The world looks so different after learning science. For example, the trees are made of air, primarily. When they are burned, they go back to air, and in the flaming heat is released the flaming heat of the sun which was bound in to convert the air into trees, and in the ash is the small remnant of the part which did not come from air, that came from the solid earth, instead. These are beautiful things, and the content of science is wonderfully full of them. They are very inspiring, and they can be used to inspire others.”

“Physicists are trying to find out how nature behaves; they may talk carelessly about some “ultimate particle” because that’s the way nature looks at a given moment, but . . . Suppose people are exploring a new continent, OK? They see water coming along the ground, they’ve seen that before, and they call it “rivers.” So they say they’re exploring to find the headwaters, they go upriver, and sure enough, there they are, it’s all going very well. But lo and behold, when they get up far enough they find the whole system’s different: There’s a great big lake, or springs, or the rivers run in a circle. You might say, “Aha! They’ve failed!” but not at all! The real reason they were doing it was to explore the land. If it turned out not to be headwaters, they might be slightly embarrassed at their carelessness in explaining themselves, but no more than that. As long as it looks like the way things are built is wheels within wheels, then you’re looking for the innermost wheel – but it might not be that way, in which case you’re looking for whatever the hell it is that you find!”

 

June 20, 2019 Posted by | Books, Physics, Science

Random stuff

i. Your Care Home in 120 Seconds. Some quotes:

“In order to get an overall estimate of mental power, psychologists have chosen a series of tasks to represent some of the basic elements of problem solving. The selection is based on looking at the sorts of problems people have to solve in everyday life, with particular attention to learning at school and then taking up occupations with varying intellectual demands. Those tasks vary somewhat, though they have a core in common.

Most tests include Vocabulary, examples: either asking for the definition of words of increasing rarity; or the names of pictured objects or activities; or the synonyms or antonyms of words.

Most tests include Reasoning, examples: either determining which pattern best completes the missing cell in a matrix (like Raven’s Matrices); or putting in the word which completes a sequence; or finding the odd word out in a series.

Most tests include visualization of shapes, examples: determining the correspondence between a 3-D figure and alternative 2-D figures; determining the pattern of holes that would result from a sequence of folds and a punch through folded paper; determining which combinations of shapes are needed to fill a larger shape.

Most tests include episodic memory, examples: number of idea units recalled across two or three stories; number of words recalled from across 1 to 4 trials of a repeated word list; number of words recalled when presented with a stimulus term in a paired-associate learning task.

Most tests include a rather simple set of basic tasks called Processing Skills. They are rather humdrum activities, like checking for errors, applying simple codes, and checking for similarities or differences in word strings or line patterns. They may seem low grade, but they are necessary when we try to organise ourselves to carry out planned activities. They tend to decline with age, leading to patchy, unreliable performance, and a tendency to muddled and even harmful errors. […]

A brain scan, for all its apparent precision, is not a direct measure of actual performance. Currently, scans are not as accurate in predicting behaviour as is a simple test of behaviour. This is a simple but crucial point: so long as you are willing to conduct actual tests, you can get a good understanding of a person’s capacities even on a very brief examination of their performance. […] There are several tests which have the benefit of being quick to administer and powerful in their predictions.[..] All these tests are good at picking up illness related cognitive changes, as in diabetes. (Intelligence testing is rarely criticized when used in medical settings). Delayed memory and working memory are both affected during diabetic crises. Digit Symbol is reduced during hypoglycaemia, as are Digits Backwards. Digit Symbol is very good at showing general cognitive changes from age 70 to 76. Again, although this is a limited time period in the elderly, the decline in speed is a notable feature. […]

The most robust and consistent predictor of cognitive change within old age, even after control for all the other variables, was the presence of the APOE e4 allele. APOE e4 carriers showed over half a standard deviation more general cognitive decline compared to noncarriers, with particularly pronounced decline in their Speed and numerically smaller, but still significant, declines in their verbal memory.

It is rare to have a big effect from one gene. Few people carry it, and it is not good to have.

ii. What are common mistakes junior data scientists make?

Apparently the OP had second thoughts about this query, so s/he deleted the question and marked the thread nsfw (??? …nothing remotely nsfw in that thread…). Fortunately the replies are all still there, and there are quite a few good responses in the thread. I have added some examples below:

“I think underestimating the domain/business side of things and focusing too much on tools and methodology. As a fairly new data scientist myself, I found myself humbled during this one project where I had spent a lot of time tweaking parameters and making sure the numbers worked just right. After going into a meeting about it, it became clear pretty quickly that my little micro-optimizations were hardly important, and instead there were X Y Z big picture considerations I was missing in my analysis.”

[…]

  • Forgetting to check how actionable the model (or features) are. It doesn’t matter if you have an amazing model for cancer prediction, if it’s based on features from tests performed as part of the post-mortem. Similarly, predicting account fraud after the money has been transferred is not going to be very useful.

  • Emphasis on lack of understanding of the business/domain.

  • Lack of communication and presentation of the impact. If improving your model (which is a quarter of the overall pipeline) by 10% in reducing customer churn is worth just ~100K a year, then it may not be worth putting into production in a large company.

  • Underestimating how hard it is to productionize models. This includes acting on the model’s outputs; it’s not just “run model, get score out per sample”.

  • Forgetting about model and feature decay over time, concept drift.

  • Underestimating the amount of time for data cleaning.

  • Thinking that data cleaning errors will be complicated.

  • Thinking that data cleaning will be simple to automate.

  • Thinking that automation is always better than heuristics from domain experts.

  • Focusing on modelling at the expense of [everything] else”

“unhealthy attachments to tools. It really doesn’t matter if you use R, Python, SAS or Excel, did you solve the problem?”

“Starting with actual modelling way too soon: you’ll end up with a model that’s really good at answering the wrong question.
First, make sure that you’re trying to answer the right question, with the right considerations. This is typically not what the client initially told you. It’s (mainly) a data scientist’s job to help the client with formulating the right question.”

iii. Some random wikipedia links: Ottoman–Habsburg wars. Planetshine. Anticipation (genetics). Cloze test. Loop quantum gravity. Implicature. Starfish Prime. Stall (fluid dynamics). White Australia policy. Apostatic selection. Deimatic behaviour. Anti-predator adaptation. Lefschetz fixed-point theorem. Hairy ball theorem. Macedonia naming dispute. Holevo’s theorem. Holmström’s theorem. Sparse matrix. Binary search algorithm. Battle of the Bismarck Sea.

iv. 5-HTTLPR: A Pointed Review. This one is hard to quote; you should read all of it. I did however decide to add a few quotes from the post, as well as a few quotes from the comments:

“…what bothers me isn’t just that people said 5-HTTLPR mattered and it didn’t. It’s that we built whole imaginary edifices, whole castles in the air on top of this idea of 5-HTTLPR mattering. We “figured out” how 5-HTTLPR exerted its effects, what parts of the brain it was active in, what sorts of things it interacted with, how its effects were enhanced or suppressed by the effects of other imaginary depression genes. This isn’t just an explorer coming back from the Orient and claiming there are unicorns there. It’s the explorer describing the life cycle of unicorns, what unicorns eat, all the different subspecies of unicorn, which cuts of unicorn meat are tastiest, and a blow-by-blow account of a wrestling match between unicorns and Bigfoot.

This is why I start worrying when people talk about how maybe the replication crisis is overblown because sometimes experiments will go differently in different contexts. The problem isn’t just that sometimes an effect exists in a cold room but not in a hot room. The problem is more like “you can get an entire field with hundreds of studies analyzing the behavior of something that doesn’t exist”. There is no amount of context-sensitivity that can help this. […] The problem is that the studies came out positive when they shouldn’t have. This was a perfectly fine thing to study before we understood genetics well, but the whole point of studying is that, once you have done 450 studies on something, you should end up with more knowledge than you started with. In this case we ended up with less. […] I think we should take a second to remember that yes, this is really bad. That this is a rare case where methodological improvements allowed a conclusive test of a popular hypothesis, and it failed badly. How many other cases like this are there, where there’s no geneticist with a 600,000 person sample size to check if it’s true or not? How many of our scientific edifices are built on air? How many useless products are out there under the guise of good science? We still don’t know.”

A few more quotes from the comment section of the post:

“most things that are obviously advantageous or deleterious in a major way aren’t gonna hover at 10%/50%/70% allele frequency.

Population variance where they claim some gene found in > [non trivial]% of the population does something big… I’ll mostly tend to roll to disbelieve.

But if someone claims a family/village with a load of weirdly depressed people (or almost any other disorder affecting anything related to the human condition in any horrifying way you can imagine) are depressed because of a genetic quirk… believable but still make sure they’ve confirmed it segregates with the condition or they’ve got decent backing.

And a large fraction of people have some kind of rare disorder […]. Long tail. Lots of disorders so quite a lot of people with something odd.

It’s not that single variants can’t have a big effect. It’s that really big effects either win and spread to everyone or lose and end up carried by a tiny minority of families where it hasn’t had time to die out yet.

Very few variants with big effect sizes are going to be half way through that process at any given time.

Exceptions are

1: mutations that confer resistance to some disease as a tradeoff for something else […] 2: Genes that confer a big advantage against something that’s only a very recent issue.”

“I think the summary could be something like:
A single gene determining 50% of the variance in any complex trait is inherently atypical, because variance depends on the population plus environment and the selection for such a gene would be strong, rapidly reducing that variance.
However, if the environment has recently changed or is highly variable, or there is a trade-off against adverse effects it is more likely.
Furthermore – if the test population is specifically engineered to target an observed trait following an apparently Mendelian inheritance pattern – such as a family group or a small genetically isolated population plus controls – 50% of the variance could easily be due to a single gene.”

v. Less research is needed.

“The most over-used and under-analyzed statement in the academic vocabulary is surely “more research is needed”. These four words, occasionally justified when they appear as the last sentence in a Masters dissertation, are as often to be found as the coda for a mega-trial that consumed the lion’s share of a national research budget, or that of a Cochrane review which began with dozens or even hundreds of primary studies and progressively excluded most of them on the grounds that they were “methodologically flawed”. Yet however large the trial or however comprehensive the review, the answer always seems to lie just around the next empirical corner.

With due respect to all those who have used “more research is needed” to sum up months or years of their own work on a topic, this ultimate academic cliché is usually an indicator that serious scholarly thinking on the topic has ceased. It is almost never the only logical conclusion that can be drawn from a set of negative, ambiguous, incomplete or contradictory data.” […]

“Here is a quote from a typical genome-wide association study:

“Genome-wide association (GWA) studies on coronary artery disease (CAD) have been very successful, identifying a total of 32 susceptibility loci so far. Although these loci have provided valuable insights into the etiology of CAD, their cumulative effect explains surprisingly little of the total CAD heritability.”  [1]

The authors conclude that not only is more research needed into the genomic loci putatively linked to coronary artery disease, but that – precisely because the model they developed was so weak – further sets of variables (“genetic, epigenetic, transcriptomic, proteomic, metabolic and intermediate outcome variables”) should be added to it. By adding in more and more sets of variables, the authors suggest, we will progressively and substantially reduce the uncertainty about the multiple and complex gene-environment interactions that lead to coronary artery disease. […] We predict tomorrow’s weather, more or less accurately, by measuring dynamic trends in today’s air temperature, wind speed, humidity, barometric pressure and a host of other meteorological variables. But when we try to predict what the weather will be next month, the accuracy of our prediction falls to little better than random. Perhaps we should spend huge sums of money on a more sophisticated weather-prediction model, incorporating the tides on the seas of Mars and the flutter of butterflies’ wings? Of course we shouldn’t. Not only would such a hyper-inclusive model fail to improve the accuracy of our predictive modeling, there are good statistical and operational reasons why it could well make it less accurate.”

vi. Why software projects take longer than you think – a statistical model.

“Anyone who built software for a while knows that estimating how long something is going to take is hard. It’s hard to come up with an unbiased estimate of how long something will take, when fundamentally the work in itself is about solving something. One pet theory I’ve had for a really long time, is that some of this is really just a statistical artifact.

Let’s say you estimate a project to take 1 week. Let’s say there are three equally likely outcomes: either it takes 1/2 week, or 1 week, or 2 weeks. The median outcome is actually the same as the estimate: 1 week, but the mean (aka average, aka expected value) is 7/6 = 1.17 weeks. The estimate is actually calibrated (unbiased) for the median (which is 1), but not for the mean.

A reasonable model for the “blowup factor” (actual time divided by estimated time) would be something like a log-normal distribution. If the estimate is one week, then let’s model the real outcome as a random variable distributed according to the log-normal distribution around one week. This has the property that the median of the distribution is exactly one week, but the mean is much larger […] Intuitively the reason the mean is so large is that tasks that complete faster than estimated have no way to compensate for the tasks that take much longer than estimated. We’re bounded by 0, but unbounded in the other direction.”

I like this way of framing the problem conceptually, and I definitely do not think it only applies to software development.
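To make the point concrete, here is a minimal Python sketch of the kind of model described in the quote above; the value of sigma is an arbitrary assumption of mine, not a number taken from the post:

```python
import numpy as np

rng = np.random.default_rng(0)

# Model the "blowup factor" (actual time / estimated time) as log-normal
# with median 1, i.e. log(blowup) ~ Normal(0, sigma).
# sigma = 1.0 is an assumed uncertainty level, not a value from the post.
sigma = 1.0
blowup = rng.lognormal(mean=0.0, sigma=sigma, size=1_000_000)

print(np.median(blowup))  # ~1.0: the estimate is right "on median"
print(blowup.mean())      # ~exp(sigma**2 / 2) ≈ 1.65: the mean is much larger
```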

“I filed this in my brain under “curious toy models” for a long time, occasionally thinking that it’s a neat illustration of a real world phenomenon I’ve observed. But surfing around on the interwebs one day, I encountered an interesting dataset of project estimation and actual times. Fantastic! […] The median blowup factor turns out to be exactly 1x for this dataset, whereas the mean blowup factor is 1.81x. Again, this confirms the hunch that developers estimate the median well, but the mean ends up being much higher. […]

If my model is right (a big if) then here’s what we can learn:

  • People estimate the median completion time well, but not the mean.
  • The mean turns out to be substantially worse than the median, due to the distribution being skewed (log-normally).
  • When you add up the estimates for n tasks, things get even worse.
  • Tasks with the most uncertainty (rather than the biggest size) can often dominate the mean time it takes to complete all tasks.”
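The last two points can be illustrated with a small simulation along the same lines; the task count and sigma values below are assumptions I made up for the illustration, not figures from the post:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical project: 20 tasks, each estimated at 1 week, 19 of them
# fairly predictable and one highly uncertain (sigma of the log-blowup).
sigmas = np.array([0.3] * 19 + [2.0])

n_sims = 100_000
actuals = rng.lognormal(mean=0.0, sigma=sigmas, size=(n_sims, len(sigmas)))
totals = actuals.sum(axis=1)

print(np.median(totals))  # not far above the naive 20-week estimate
print(totals.mean())      # substantially larger than the median
# share of the expected total coming from the single uncertain task:
print(actuals[:, -1].mean() / totals.mean())  # roughly a quarter
```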

vii. Attraction inequality and the dating economy.

“…the relentless focus on inequality among politicians is usually quite narrow: they tend to consider inequality only in monetary terms, and to treat “inequality” as basically synonymous with “income inequality.” There are so many other types of inequality that get air time less often or not at all: inequality of talent, height, number of friends, longevity, inner peace, health, charm, gumption, intelligence, and fortitude. And finally, there is a type of inequality that everyone thinks about occasionally and that young single people obsess over almost constantly: inequality of sexual attractiveness. […] One of the useful tools that economists use to study inequality is the Gini coefficient. This is simply a number between zero and one that is meant to represent the degree of income inequality in any given nation or group. An egalitarian group in which each individual has the same income would have a Gini coefficient of zero, while an unequal group in which one individual had all the income and the rest had none would have a Gini coefficient close to one. […] Some enterprising data nerds have taken on the challenge of estimating Gini coefficients for the dating “economy.” […] The Gini coefficient for [heterosexual] men collectively is determined by [-ll-] women’s collective preferences, and vice versa. If women all find every man equally attractive, the male dating economy will have a Gini coefficient of zero. If men all find the same one woman attractive and consider all other women unattractive, the female dating economy will have a Gini coefficient close to one.”
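For readers who want to play with the measure themselves, here is a short Python sketch of the standard Gini computation applied to made-up ‘likes received’ data; the numbers are purely illustrative and have nothing to do with the datasets discussed below:

```python
import numpy as np

def gini(values):
    """Gini coefficient of non-negative values
    (0 = perfect equality, approaching 1 = one person has everything)."""
    x = np.sort(np.asarray(values, dtype=float))
    n = x.size
    if x.sum() == 0:
        return 0.0
    ranks = np.arange(1, n + 1)
    # standard rank-based formula for the Gini coefficient
    return 2 * (ranks * x).sum() / (n * x.sum()) - (n + 1) / n

print(gini([10, 10, 10, 10]))   # 0.0  - everyone receives equally many likes
print(gini([0, 0, 0, 40]))      # 0.75 - one person receives everything
print(gini([1, 2, 3, 10, 50]))  # ~0.64
```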

“A data scientist representing the popular dating app “Hinge” reported on the Gini coefficients he had found in his company’s abundant data, treating “likes” as the equivalent of income. He reported that heterosexual females faced a Gini coefficient of 0.324, while heterosexual males faced a much higher Gini coefficient of 0.542. So neither sex has complete equality: in both cases, there are some “wealthy” people with access to more romantic experiences and some “poor” who have access to few or none. But while the situation for women is something like an economy with some poor, some middle class, and some millionaires, the situation for men is closer to a world with a small number of super-billionaires surrounded by huge masses who possess almost nothing. According to the Hinge analyst:

On a list of 149 countries’ Gini indices provided by the CIA World Factbook, this would place the female dating economy as 75th most unequal (average—think Western Europe) and the male dating economy as the 8th most unequal (kleptocracy, apartheid, perpetual civil war—think South Africa).”

Btw., I’m reasonably certain “Western Europe” as most people think of it is not average in terms of Gini, and that half-way down the list should rather be represented by some other region or country type, like, say Mongolia or Bulgaria. A brief look at Gini lists seemed to support this impression.

“Quartz reported on this finding, and also cited another article about an experiment with Tinder that claimed that “the bottom 80% of men (in terms of attractiveness) are competing for the bottom 22% of women and the top 78% of women are competing for the top 20% of men.” These studies examined “likes” and “swipes” on Hinge and Tinder, respectively, which are required if there is to be any contact (via messages) between prospective matches. […] Yet another study, run by OkCupid on their huge datasets, found that women rate 80 percent of men as “worse-looking than medium,” and that this 80 percent “below-average” block received replies to messages only about 30 percent of the time or less. By contrast, men rate women as worse-looking than medium only about 50 percent of the time, and this 50 percent below-average block received message replies closer to 40 percent of the time or higher.

If these findings are to be believed, the great majority of women are only willing to communicate romantically with a small minority of men while most men are willing to communicate romantically with most women. […] It seems hard to avoid a basic conclusion: that the majority of women find the majority of men unattractive and not worth engaging with romantically, while the reverse is not true. Stated in another way, it seems that men collectively create a “dating economy” for women with relatively low inequality, while women collectively create a “dating economy” for men with very high inequality.”

I think the author goes a bit off the rails later in the post, but the data is interesting. It is, however, important to keep in mind in contexts like these that sexual selection pressures apply at multiple levels, not just one, and that partner preferences can be non-trivial to model satisfactorily; for example, as many women have learned the hard way, males may have very different standards for whom to a) ‘engage with romantically’ and b) ‘consider a long-term partner’.

viii. Flipping the Metabolic Switch: Understanding and Applying Health Benefits of Fasting.

“Intermittent fasting (IF) is a term used to describe a variety of eating patterns in which no or few calories are consumed for time periods that can range from 12 hours to several days, on a recurring basis. Here we focus on the physiological responses of major organ systems, including the musculoskeletal system, to the onset of the metabolic switch – the point of negative energy balance at which liver glycogen stores are depleted and fatty acids are mobilized (typically beyond 12 hours after cessation of food intake). Emerging findings suggest the metabolic switch from glucose to fatty acid-derived ketones represents an evolutionarily conserved trigger point that shifts metabolism from lipid/cholesterol synthesis and fat storage to mobilization of fat through fatty acid oxidation and fatty-acid derived ketones, which serve to preserve muscle mass and function. Thus, IF regimens that induce the metabolic switch have the potential to improve body composition in overweight individuals. […] many experts have suggested IF regimens may have potential in the treatment of obesity and related metabolic conditions, including metabolic syndrome and type 2 diabetes.”

“In most studies, IF regimens have been shown to reduce overall fat mass and visceral fat, both of which have been linked to increased diabetes risk. IF regimens ranging in duration from 8 to 24 weeks have consistently been found to decrease insulin resistance. In line with this, many, but not all, large-scale observational studies have also shown a reduced risk of diabetes in participants following an IF eating pattern.”

“…we suggest that future randomized controlled IF trials should use biomarkers of the metabolic switch (e.g., plasma ketone levels) as a measure of compliance and the magnitude of negative energy balance during the fasting period. It is critical for this switch to occur in order to shift metabolism from lipidogenesis (fat storage) to fat mobilization for energy through fatty acid β-oxidation. […] As the health benefits and therapeutic efficacies of IF in different disease conditions emerge from RCTs, it is important to understand the current barriers to widespread use of IF by the medical and nutrition community and to develop strategies for broad implementation. One argument against IF is that, despite the plethora of animal data, some human studies have failed to show such significant benefits of IF over CR [Calorie Restriction]. Adherence to fasting interventions has been variable; some short-term studies have reported over 90% adherence, whereas in a one year ADMF study the dropout rate was 38% vs 29% in the standard caloric restriction group.”

ix. Self-repairing cells: How single cells heal membrane ruptures and restore lost structures.

June 2, 2019 Posted by | Astronomy, Biology, Data, Diabetes, Economics, Evolutionary biology, Genetics, Geography, History, Mathematics, Medicine, Physics, Psychology, Statistics, Wikipedia

Kinematics of Circumgalactic Gas – Crystal Martin

 

A few links related to the lecture coverage:

The green valley is a red herring: Galaxy Zoo reveals two evolutionary pathways towards quenching of star formation in early- and late-type galaxies (Schawinski et al, 2014).
The Large, Oxygen-Rich Halos of Star-Forming Galaxies Are A Major Reservoir of Galactic Metals (Tumlinson et al, 2011).
Gas in galactic halos (Dettmar, 2012).
Gaseous Galaxy Halos (Putman, Peek & Joung, 2012).
The kinematic connection between QSO-absorbing gas and galaxies at intermediate redshift (Steidel et al. 2002).
W. M. Keck Observatory.
Sloan Digital Sky Survey.
Virial mass.
Kinematics of Circumgalactic Gas (the lecturer is a co-author of this presentation).
Kinematics of Circumgalactic Gas: Quasars Probing the Inner CGM of z=0.2 Galaxies (-ll-). Here’s the paper: Quasars Probing Galaxies. I. Signatures of Gas Accretion at Redshift z ≈ 0.2 (Ho, Martin, Kacprzak & Churchill, 2017).
MAGIICAT III. Interpreting Self-Similarity of the Circumgalactic Medium with Virial Mass using MgII Absorption (Nielsen et al, 2013).
Fiducial marker.
Gas kinematics, morphology and angular momentum in the FIRE simulations (El-Badry et al, 2018).

December 22, 2018 Posted by | Astronomy, Lectures, Physics

Geophysics (II)

In this post I have added some observations from, and links related to, the last half of the book’s coverage.

“It is often […] useful to describe a force in terms of the acceleration it produces. Acceleration is the rate of change of velocity; however, when a force acts on a body with a given mass, acceleration is also the force experienced by each unit of mass. For example, a 100 kg man weighs ten times more than a 10 kg child but each experiences the same gravitational acceleration, which is a property of the Earth. The gravitational and centrifugal accelerations have different directions: gravitational acceleration acts inwards towards the Earth’s centre, whereas centrifugal acceleration acts outwards away from the rotation axis. Gravity is the acceleration that results from combining these two accelerations. The direction of gravity defines the local vertical direction […] and thereby the horizontal plane. Due to the different directions of its component accelerations, gravity rarely acts radially towards the centre of the Earth; it only does so at the poles and at the equator. For similar reasons the value of gravity varies with latitude. […] The end result is that gravity is about 0.5 per cent stronger at the poles than at the equator. […] Using the measured values of gravity and the Earth’s radius, in conjunction with the gravitational constant, the mass and volume of the Earth can be obtained. Combining these gives a mean density for the Earth of 5,515 kg/m3. The average density of surface rocks is only half of this value, which implies that density must increase with depth in the Earth. This was an important discovery for scientists concerned with the size and shape of the Earth in the 18th and early 19th centuries. The variation of density with depth in the layered Earth […] was later established from the interpretation of P- and S-wave seismic velocities and the analysis of free oscillations.”
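As a quick sanity check on the quoted numbers, here is a minimal Python sketch that recovers the Earth’s mass and mean density from surface gravity, the mean radius, and the gravitational constant (the rounded input values are my own, not taken from the book):

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
g = 9.81        # surface gravity, m/s^2 (varies ~0.5% between equator and poles)
R = 6.371e6     # mean Earth radius, m

M = g * R**2 / G                 # from g = G*M/R^2
V = 4.0 / 3.0 * math.pi * R**3   # volume of a sphere
rho = M / V

print(f"mass    ≈ {M:.2e} kg")        # ≈ 5.97e24 kg
print(f"density ≈ {rho:.0f} kg/m^3")  # ≈ 5,500 kg/m^3, close to the quoted 5,515
```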

“The Moon’s influence on the Earth’s rotation is stronger than that of the Sun or the other planets in the solar system. The centre of mass of the Earth–Moon pair, called the barycentre, lies at about 4,600 km from the Earth’s centre — well within the Earth’s radius of 6,371 km. The Earth and Moon rotate about this point […]. The elliptical orbit of the Earth about the Sun is in reality the track followed by the barycentre. The rotation of the Earth–Moon pair about their barycentre causes a centrifugal acceleration in the Earth that is directed away from the Moon. The lunar gravitational attraction opposes this and the combined effect is to deform the equipotential surface of the tide and draw it out into the shape of a prolate ellipsoid, resembling a rugby ball. Consequently there is a tidal bulge on the far side of the Earth from the Moon, complementary to the tidal bulge on the near side. The bulges are unequal in size. Each day the Earth rotates under both tidal bulges, so that two unequal tides are experienced; they are resolved into a daily (diurnal) tide and a twice-daily (semi-diurnal) tide. Although we think of the tides as a fluctuation of sea level, they also take place in the solid planet, where they are known as bodily earth tides. These are manifest as displacements of the solid surface by up to 38 cm vertically and 5 cm horizontally. The Sun also contributes to the tides, creating semi-annual and annual components. […] The displacements of fluid and solid mass have a braking effect on the Earth’s rotation, slowing it down and gradually increasing the length of the day, currently at about 1.8 milliseconds per century. […] The reciprocal effect of the Earth’s gravitation on the Moon has slowed lunar rotation about its own axis to the extent that the Moon’s spin now has the same period as its rotation about the Earth. That is why it always presents the same face to us. Conservation of angular momentum results in a transfer of angular momentum from the Earth to the Moon, which is accomplished by an increase in the Earth–Moon distance of about 3.7 cm/yr (roughly the rate at which fingernails grow), and by a slowing of the Moon’s rotation rates about its own axis and about the Earth. In time, all three rotations will be synchronous, with a period of 48 present Earth-days. The Moon will then be stationary over the Earth and both bodies will present the same face to each other.”

“[I]sostatic compensation causes the crust to move vertically to seek a new hydrostatic equilibrium in response to changes in the load on the crust. Thus, when erosion removes surface material or when an ice-cap melts, the isostatic response is uplift of the mountain. Examples of this uplift are found in northern Canada and Fennoscandia, which were covered by a 1–2 kilometre-thick ice sheet during the last ice age; the surface load depressed the crust in these regions by up to 500 m. The ice age ended about 10,000 years ago, and subsequent postglacial isostatic adjustment has resulted in vertical crustal movements. The land uplift was initially faster than it is today, but it continues at rates of up to 9 mm/yr in Scandinavia and Finland […]. The phenomenon has been observed for decades by repeated high-precision levelling campaigns. […] the increase of temperature with depth results in anelastic behaviour of the deeper lithosphere. This is the same kind of behaviour that causes attenuation of seismic waves in the Earth […] A specific type of anelastic behaviour is viscoelasticity. In this mechanism a material responds to short-duration stresses in the same way that an elastic body does, but over very long time intervals it flows like a sticky viscous fluid. The flow of otherwise solid material in the mantle is understood to be a viscoelastic process. This type of behaviour has been invoked to explain the response of the upper mantle to the loading of northern Canada and Fennoscandia by the ice sheets. In each region the weight of an ice sheet depressed the central area, forcing it down into the mantle. The displaced mantle caused the surrounding land to bulge upward slightly, as a jelly does around a point where it is pressed down. As a result of postglacial relaxation the opposite motion is now happening: the peripheral bulge is sinking while the central region is being uplifted.”

“The molecules of an object are in constant motion and the energy of this motion is called kinetic energy. Temperature is a measure of the average kinetic energy of the molecules in a given volume. […] The total energy of motion of all the molecules in a volume is its internal energy. When two objects with different temperatures are in contact, they exchange internal energy until they have the same temperature. The energy transferred is the amount of heat exchanged. Thus, if heat is added to an object, its kinetic energy is increased, the motions of individual atoms and molecules speed up, and its temperature rises. Heat is a form of energy and is therefore measured in the standard energy unit, the joule. The expenditure of one joule per second defines a watt, the unit of power. […] The amount of geothermal heat flowing per second across a unit of surface area of the Earth is called the geothermal flux, or more simply the heat flow. It is measured in mW/m2. The Earth’s internal heat is its greatest source of energy. It powers global geological processes such as plate tectonics and the generation of the geomagnetic field. The annual amount of heat flowing out of the Earth is more than 100 times greater than the elastic energy released in earthquakes and ten times greater than the loss of kinetic energy as the planet’s rotation slows due to tidal friction. Although the solar radiation that falls on the Earth is a much larger source of energy, it is important mainly for its effect on natural processes at or above the Earth’s surface. The atmosphere and clouds reflect or absorb about 45 per cent of solar radiation, and the land and ocean surfaces reflect a further 5 per cent and absorb 50 per cent. Almost all of the energy absorbed at the surface and in the clouds and atmosphere is radiated back into space. The solar energy that reaches the surface penetrates only a short distance into the ground, because water and rocks are poor conductors of heat. […] The daily temperature fluctuation in rocks and sediments sinks to less than 1 per cent of its surface amplitude in a depth of only 1 metre. The annual seasonal change of temperature penetrates some nineteen times deeper, but its effects are barely felt below 20 m.”

“The [Earth’s] internal heat arises from two sources. Part is produced at the present time by radioactivity in crustal rocks and in the mantle, and part is primordial. […] The internal heat has to find its way out of the Earth. The three basic forms of heat transfer are radiation, conduction, and convection. Heat is also transferred in compositional and phase transitions. […] Heat is transported throughout the interior by conduction, and convection plays an important role in the mantle and fluid outer core. […] Heat transport by conduction is most important in solid regions of the Earth. Thermal conduction takes place by transferring energy in the vibrations of atoms, or in collisions between molecules, without bodily displacement of the material. The flow of heat through a material by conduction depends on two quantities: the rate at which temperature increases with depth (the temperature gradient), and the material’s ability to conduct heat, a physical property known as thermal conductivity. The product of the temperature gradient and the thermal conductivity defines the heat flow. […] Heat flow varies greatly over the Earth’s surface depending on the local geology and tectonic situation. The estimated average heat flow is 92 mW/m2. Multiplying this value by the Earth’s surface area, which is about 510 million km2, gives a global heat loss of about 47,000 GW […]. For comparison, the energy production of a large nuclear power plant is about 1 GW.”
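The global heat-loss figure in the quote is easy to reproduce; a short check in Python, using the book’s average heat flow and surface area:

```python
mean_heat_flow = 92e-3      # W/m^2 (the book's estimated average heat flow)
surface_area = 510e6 * 1e6  # 510 million km^2 converted to m^2

total_w = mean_heat_flow * surface_area
print(f"{total_w / 1e9:,.0f} GW")  # ≈ 47,000 GW, i.e. about 47 TW
```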

“An adiabatic thermal process is one in which heat is neither gained nor lost. This can be the case when a process occurs too quickly to allow heat to be exchanged, as in the rapid compressions and expansions during the passage of a seismic wave. The variation of temperature with depth under adiabatic conditions defines the adiabatic temperature gradient. […] Consider what would happen in a fluid if the temperature increases with depth more rapidly than the adiabatic gradient. If a small parcel of material at a particular depth is moved upward adiabatically to a shallower depth, it experiences a drop in pressure corresponding to the depth difference and a corresponding adiabatic decrease in temperature. However, the decrease is not as large as required by the real temperature gradient, so the adiabatically displaced parcel is now hotter and less dense than its environment. It experiences a buoyant uplift and continues to rise, losing heat and increasing in density until it is in equilibrium with its surroundings. Meanwhile, cooler material adjacent to its original depth fills the vacated place, closing the cycle. This process of heat transport, in which material and heat are transported together, is thermal convection. Eventually the loss of heat by convection brings the real temperature gradient close to the adiabatic gradient. Consequently, a well-mixed, convecting fluid has a temperature profile close to the adiabatic curve. Convection is the principal method of heat transport in the Earth’s fluid outer core. Convection is also an important process of heat transport in the mantle. […] Mantle convection plays a crucial role in the cooling history and evolution of the planet.”

“It is important to appreciate the timescale on which flow occurs in the mantle. The rate is quite different from the familiar flow of a sticky liquid, such as blood or motor oil […]. The mantle is vastly stiffer. Estimates of viscosity for the lower mantle are around 1022 Pa·s (pascal seconds), which is 1025 times that of water. This is an enormous factor (similar to the ratio of the mass of the entire Earth to a kilogram mass). The viscosity varies within the mantle, with the upper mantle about 20 times less viscous than the lower mantle. Flow takes place in the mantle by the migration of defects through the otherwise solid material. This is a slow process that produces flow rates on the order of centimetres per year. However, geological processes occur on a very long timescale, spanning tens or hundreds of millions of years. This allows convection to be an important factor in the transport of heat through the mantle.”

“The Sun has a strong magnetic field, greatly exceeding that of any planet. It arises from convection in the solar core and is sufficiently irregular that it produces regions of lower than normal temperature on the Sun’s surface, called sunspots. These affect the release of charged particles (electrons, protons, and alpha particles) from the Sun’s atmosphere. The particles are not bound to each other, but form a plasma that spreads out at supersonic speed. The flow of electric charge is called the solar wind; it is accompanied by a magnetic field known as the interplanetary magnetic field. The solar emissions are variable, controlled by changes in the Sun’s magnetic field. […] The magnetic field of a planet deflects the solar wind around it. This blocks the influx of solar radiation and prevents the atmosphere from being blown away […] Around the Earth (as well as the giant planets and Mercury) the region in which the planet’s magnetic field is stronger than the interplanetary field is called the magnetosphere; its shape resembles the bow-wave and wake of a moving ship. […] It compresses the field on the daytime side of the Earth, forming a bow shock, about 17 km thick, which deflects most of the solar wind around the planet. However, some of the plasma penetrates the barrier and forms a region called the magnetosheath; the boundary between the plasma and the magnetic field is called the magnetopause. The solar wind causes the magnetic field on the night-time side of the Earth to stretch out to form a magnetotail […] that extends several million kilometres ‘downwind’ from the Earth. Similar features characterize the magnetic fields of other planets. […] Rotation and the related Coriolis force, together with convection, are necessary factors for a self-sustaining dynamo”.

Links:
Gravity. Inertial force. Centrifugal force. Centripetal force.
Gravimeter. Gal (unit).
Reference ellipsoid. Undulation of the geoid. Satellite geodesy. Interferometric synthetic-aperture radar. Global Positioning System. Galileo. GLONASS. Differential GPS.
Gravity Recovery and Climate Experiment (GRACE). Gravity Field and Steady-State Ocean Circulation Explorer (GOCE).
Gradiometer.
Gravity surveying. Bouguer anomaly. Free-air gravity anomaly. Eötvös effect.
Isostasy.
Craton.
Solidus.
Diamond anvil cell.
Mantle plume.
Hotspot (geology).
Magnetism.
Earth’s magnetic field.
International Geomagnetic Reference Field.
Telluric current. Magnetotellurics.
SWARM mission.
Ferromagnetism. Curie point.
Paleomagnetism. Plate tectonics. Vine–Matthews–Morley hypothesis.
Geomagnetic reversal (“During the past 10 Myr there have been on average 4-5 reversals per Myr; the most recent full reversal happened 780,000 yr ago.”).
Magnetostratigraphy.

November 24, 2018 Posted by | Astronomy, Books, Geology, Physics | Leave a comment

Geophysics (I)

“Geophysics is a field of earth sciences that uses the methods of physics to investigate the physical properties of the Earth and the processes that have determined and continue to govern its evolution. Geophysical investigations cover a wide range of research fields, extending from surface changes that can be observed from Earth-orbiting satellites to unseen behaviour in the Earth’s deep interior. […] This book presents a general overview of the principal methods of geophysics that have contributed to our understanding of Planet Earth and how it works.”

I gave this book five stars on goodreads, where I deemed it: ‘An excellent introduction to the topic, with high-level yet satisfactorily detailed coverage of many areas of interest.’ It doesn’t cover these topics in the amount of detail they’re covered in books like Press & Siever (…a book which I incidentally covered, though not in much detail, here and here), but it’s a very decent introductory book on these topics. I have added some observations and links related to the first half of the book’s coverage below.

“The gravitational attractions of the other planets — especially Jupiter, whose mass is 2.5 times the combined mass of all the other planets — influence the Earth’s long-term orbital rotations in a complex fashion. The planets move with different periods around their differently shaped and sized orbits. Their gravitational attractions impose fluctuations on the Earth’s orbit at many frequencies, a few of which are more significant than the rest. One important effect is on the obliquity: the amplitude of the axial tilt is forced to change rhythmically between a maximum of 24.5 degrees and a minimum of 22.1 degrees with a period of 41,000 yr. Another gravitational interaction with the other planets causes the orientation of the elliptical orbit to change with respect to the stars […]. The line of apsides — the major axis of the ellipse — precesses around the pole to the ecliptic in a prograde sense (i.e. in the same sense as the Earth’s rotation) with a period of 100,000 yr. This is known as planetary precession. Additionally, the shape of the orbit changes with time […], so that the eccentricity varies cyclically between 0.005 (almost circular) and a maximum of 0.058; currently it is 0.0167 […]. The dominant period of the eccentricity fluctuation is 405,000 yr, on which a further fluctuation of around 100,000 yr is superposed, which is close to the period of the planetary precession.”

“The amount of solar energy received by a unit area of the Earth’s surface is called the insolation. […] The long-term fluctuations in the Earth’s rotation and orbital parameters influence the insolation […] and this causes changes in climate. When the obliquity is smallest, the axis is more upright with respect to the ecliptic than at present. The seasonal differences are then smaller and vary less between polar and equatorial regions. Conversely, a large axial tilt causes an extreme difference between summer and winter at all latitudes. The insolation at any point on the Earth thus changes with the obliquity cycle. Precession of the axis also changes the insolation. At present the north pole points away from the Sun at perihelion; one half of a precessional cycle later it will point away from the Sun at aphelion. This results in a change of insolation and an effect on climate with a period equal to that of the precession. The orbital eccentricity cycle changes the Earth–Sun distances at perihelion and aphelion, with corresponding changes in insolation. When the orbit is closest to being circular, the perihelion–aphelion difference in insolation is smallest, but when the orbit is more elongate this difference increases. In this way the changes in eccentricity cause long-term variations in climate. The periodic climatic changes due to orbital variations are called Milankovitch cycles, after the Serbian astronomer Milutin Milankovitch, who studied them systematically in the 1920s and 1930s. […] The evidence for cyclical climatic variations is found in geological sedimentary records and in long cores drilled into the ice on glaciers and in polar regions. […] Sedimentation takes place slowly over thousands of years, during which the Milankovitch cycles are recorded in the physical and chemical properties of the sediments. Analyses of marine sedimentary sequences deposited in the deep oceans over millions of years have revealed cyclical variations in a number of physical properties. Examples are bedding thickness, sediment colour, isotopic ratios, and magnetic susceptibility. […] The records of oxygen isotope ratios in long ice cores display Milankovitch cycles and are important evidence for the climatic changes, generally referred to as orbital forcing, which are brought about by the long-term variations in the Earth’s orbit and axial tilt.”
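
To see how modest changes in eccentricity translate into insolation changes, note that the solar flux falls off as the inverse square of the Earth–Sun distance, and that the perihelion and aphelion distances are a(1−e) and a(1+e). The short sketch below works through that arithmetic for the eccentricity values quoted above; it is my own illustration, not a calculation from the book.

```python
# Sketch: perihelion/aphelion insolation contrast as a function of eccentricity.
# Flux ~ 1/r^2, with r_perihelion = a*(1 - e) and r_aphelion = a*(1 + e).
def perihelion_aphelion_flux_ratio(e):
    return ((1 + e) / (1 - e)) ** 2

for e in (0.005, 0.0167, 0.058):   # near-circular, present-day, maximum (values from the quote)
    excess = (perihelion_aphelion_flux_ratio(e) - 1) * 100
    print(f"e = {e:.4f}: perihelion insolation exceeds aphelion by ~{excess:.1f}%")
# Roughly 2% for a near-circular orbit, ~7% today, and ~26% at maximum eccentricity.
```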

“Stress is defined as the force acting on a unit area. The fractional deformation it causes is called strain. The stress–strain relationship describes the mechanical behaviour of a material. When subjected to a low stress, materials deform in an elastic manner so that stress and strain are proportional to each other and the material returns to its original unstrained condition when the stress is removed. Seismic waves usually propagate under conditions of low stress. If the stress is increased progressively, a material eventually reaches its elastic limit, beyond which it cannot return to its unstrained state. Further stress causes disproportionately large strain and permanent deformation. Eventually the stress causes the material to reach its breaking point, at which it ruptures. The relationship between stress and strain is an important aspect of seismology. Two types of elastic deformation—compressional and shear—are important in determining how seismic waves propagate in the Earth. Imagine a small block that is subject to a deforming stress perpendicular to one face of the block; this is called a normal stress. The block shortens in the direction it is squeezed, but it expands slightly in the perpendicular direction; when stretched, the opposite changes of shape occur. These reversible elastic changes depend on how the material responds to compression or tension. This property is described by a physical parameter called the bulk modulus. In a shear deformation, the stress acts parallel to the surface of the block, so that one edge moves parallel to the opposite edge, changing the shape but not the volume of the block. This elastic property is described by a parameter called the shear modulus. An earthquake causes normal and shear strains that result in four types of seismic wave. Each type of wave is described by two quantities: its wavelength and frequency. The wavelength is the distance between successive peaks of a vibration, and the frequency is the number of vibrations per second. Their product is the speed of the wave.”
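
The bulk and shear moduli mentioned above determine the seismic wave speeds through two standard relations: V_p = sqrt((K + 4μ/3)/ρ) and V_s = sqrt(μ/ρ). The sketch below plugs in typical crustal-rock values (my assumptions, not numbers from the book) and also shows where the '58 per cent' figure quoted in the next passage comes from: for a so-called Poisson solid V_s/V_p = 1/√3 ≈ 0.58.

```python
# Sketch: P- and S-wave speeds from elastic moduli and density.
# V_p = sqrt((K + 4*mu/3) / rho),  V_s = sqrt(mu / rho)
import math

K = 50e9      # bulk modulus (Pa), assumed typical crustal value
mu = 30e9     # shear modulus (Pa), assumed
rho = 2700.0  # density (kg/m^3), assumed

v_p = math.sqrt((K + 4 * mu / 3) / rho)
v_s = math.sqrt(mu / rho)
print(f"V_p ~ {v_p:.0f} m/s, V_s ~ {v_s:.0f} m/s, ratio V_s/V_p ~ {v_s / v_p:.2f}")

# In a fluid mu = 0, so V_s = 0: shear waves cannot propagate through liquids.
```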

“A seismic P-wave (also called a primary, compressional, or longitudinal wave) consists of a series of compressions and expansions caused by particles in the ground moving back and forward parallel to the direction in which the wave travels […] It is the fastest seismic wave and can pass through fluids, although with reduced speed. When it reaches the Earth’s surface, a P-wave usually causes nearly vertical motion, which is recorded by instruments and may be felt by people but usually does not result in severe damage. […] A seismic S-wave (i.e. secondary or shear wave) arises from shear deformation […] It travels by means of particle vibrations perpendicular to the direction of travel; for that reason it is also known as a transverse wave. The shear wave vibrations are further divided into components in the horizontal and vertical planes, labelled the SH- and SV-waves, respectively. […] an S-wave is slower than a P-wave, propagating about 58 per cent as fast […] Moreover, shear waves can only travel in a material that supports shear strain. This is the case for a solid object, in which the molecules have regular locations and intermolecular forces hold the object together. By contrast, a liquid (or gas) is made up of independent molecules that are not bonded to each other, and thus a fluid has no shear strength. For this reason S-waves cannot travel through a fluid. […] S-waves have components in both the horizontal and vertical planes, so when they reach the Earth’s surface they shake structures from side to side as well as up and down. They can have larger amplitudes than P-waves. Buildings are better able to resist up-and-down motion than side-to-side shaking, and as a result SH-waves can cause serious damage to structures. […] Surface waves spread out along the Earth’s surface around a point – called the epicentre – located vertically above the earthquake’s source […] Very deep earthquakes usually do not produce surface waves, but the surface waves caused by shallow earthquakes are very destructive. In contrast to seismic body waves, which can spread out in three dimensions through the Earth’s interior, the energy in a seismic surface wave is guided by the free surface. It is only able to spread out in two dimensions and is more concentrated. Consequently, surface waves have the largest amplitudes on the seismogram of a shallow earthquake […] and are responsible for the strongest ground motions and greatest damage. There are two types of surface wave. [Rayleigh waves & Love waves, US].”

“The number of earthquakes that occur globally each year falls off with increasing magnitude. Approximately 1.4 million earthquakes annually have magnitude 2 or larger; of these about 1,500 have magnitude 5 or larger. The number of very damaging earthquakes with magnitude above 7 varies from year to year but has averaged about 15-20 annually since 1900. On average, one earthquake per year has magnitude 8 or greater, although such large events occur at irregular intervals. A magnitude 9 earthquake may release more energy than the cumulative energy of all other earthquakes in the same year. […] Large earthquakes may be preceded by foreshocks, which are lesser events that occur shortly before and in the same region as the main shock. They indicate the build-up of stress that leads to the main rupture. Large earthquakes are also followed by smaller aftershocks on the same fault or near to it; their frequency decreases as time passes, following the main shock. Aftershocks may individually be large enough to have serious consequences for a damaged region, because they can cause already weakened structures to collapse. […] About 90 per cent of the world’s earthquakes and 75 per cent of its volcanoes occur in the circum-Pacific belt known as the ‘Ring of Fire‘. […] The relative motions of the tectonic plates at their margins, together with changes in the state of stress within the plates, are responsible for most of the world’s seismicity. Earthquakes occur much more rarely in the geographic interiors of the plates. However, large intraplate earthquakes do occur […] In 2001 an intraplate earthquake with magnitude 7.7 occurred on a previously unknown fault under Gujarat, India […], killing 20,000 people and destroying 400,000 homes. […] Earthquakes are a serious hazard for populations, their property, and the natural environment. Great effort has been invested in the effort to predict their occurrence, but as yet without general success. […] Scientists have made more progress in assessing the possible location of an earthquake than in predicting the time of its occurrence. Although a damaging event can occur whenever local stress in the crust exceeds the breaking point of underlying rocks, the active seismic belts where this is most likely to happen are narrow and well defined […]. Unfortunately many densely populated regions and great cities are located in some of the seismically most active regions.[…] it is not yet possible to forecast reliably where or when an earthquake will occur, or how large it is likely to be.”
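
The claim that a single magnitude-9 event can dominate a whole year's energy release follows from the logarithmic energy–magnitude relation, often written log10 E ≈ 1.5M + 4.8 with E in joules, so each whole magnitude step corresponds to roughly a 32-fold increase in energy. The sketch below combines that relation with the rough annual event counts quoted above (treating each event as if it sat exactly at the threshold magnitude); it is an illustration of the scaling, not a calculation from the book.

```python
# Sketch: energy comparison using the Gutenberg-Richter energy-magnitude
# relation log10(E[J]) ~ 1.5*M + 4.8. Event counts are the rough figures quoted above.
def energy_joules(magnitude):
    return 10 ** (1.5 * magnitude + 4.8)

e9 = energy_joules(9.0)                                          # one magnitude-9 event
e_others = 1500 * energy_joules(5.0) + 17 * energy_joules(7.0)   # rough annual tally
print(f"one M9 event:      {e9:.1e} J")
print(f"1500 M5 + 17 M7:   {e_others:.1e} J")
print(f"ratio:             {e9 / e_others:.0f}x")
# A single great earthquake can release far more energy than thousands of smaller ones.
```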

Links:

Plate tectonics.
Geodesy.
Seismology. Seismometer.
Law of conservation of energy. Second law of thermodynamics (This book incidentally covers these topics in much more detail, and does it quite well – US).
Angular momentum.
Big Bang model. Formation and evolution of the Solar System (…I should probably mention here that I do believe Wikipedia covers these sorts of topics quite well).
Invariable plane. Ecliptic.
Newton’s law of universal gravitation.
Kepler’s laws of planetary motion.
Potential energy. Kinetic energy. Orbital eccentricity. Line of apsides. Axial tilt. Figure of the Earth. Nutation. Chandler wobble.
Torque. Precession.
Very-long-baseline interferometry.
Reflection seismology.
Geophone.
Seismic shadow zone. Ray tracing (physics).
Structure of the Earth. Core–mantle boundary. D” region. Mohorovičić discontinuity. Lithosphere. Asthenosphere. Mantle transition zone.
Peridotite. Olivine. Perovskite.
Seismic tomography.
Lithoprobe project.
Orogenic belt.
European Geotraverse Project.
Microseism. Seismic noise.
Elastic-rebound theory. Fault (geology).
Richter magnitude scale (…of note: “the Richter scale underestimates the size of very large earthquakes with magnitudes greater than about 8.5”). Seismic moment. Moment magnitude scale. Modified Mercalli intensity scale. European macroseismic scale.
Focal mechanism.
Transform fault. Euler pole. Triple junction.
Megathrust earthquake.
Alpine fault. East African Rift.

November 1, 2018 Posted by | Astronomy, Books, Geology, Physics | Leave a comment

Black Hole Magnetospheres

The lecturer says ‘ah’ and ‘ehm’ a lot, especially in the beginning (it gets much better later in the talk), but that is not a good reason to skip the lecture. The last five minutes of the lecture, after the wrap-up, can safely be skipped without missing out on anything.

I’ve added some links related to the coverage below.

Astrophysical jet.
Magnetosphere.
The Optical Variability of the Quasar 3C 279: The Signature of a Decelerating Jet? (Böttcher & Principe, 2009).
The slope of the black-hole mass versus velocity dispersion correlation (Tremaine et al., 2002).
Radio-Loudness of Active Galactic Nuclei: Observational Facts and Theoretical Implications (Sikora, Stawarz & Lasota, 2007).
Jet Launching Structure Resolved Near the Supermassive Black Hole in M87 (Doeleman et al., 2012).
Event Horizon Telescope.
The effective acceleration of plasma outflow in the paraboloidal magnetic field (Beskin & Nokhrina, 2006).
Toroidal magnetic field.
Current sheet.
No-hair theorem.
Frame-dragging.
Alfvén velocity.
Lorentz factor.
Magnetic acceleration of ultrarelativistic jets in gamma-ray burst sources (Komissarov et al., 2009).
Asymptotic domination of cold relativistic MHD winds by kinetic energy flux (Begelman & Li, 1994).
Magnetic nozzle.
Mach cone.
Collimated beam.
Magnetohydrodynamic simulations of gamma-ray burst jets: Beyond the progenitor star (Tchekhovskoy, Narayan & McKinney, 2010).

October 31, 2018 Posted by | Astronomy, Lectures, Physics, Studies | Leave a comment

Perception (I)

Here’s my short goodreads review of the book. In this post I’ll include some observations and links related to the first half of the book’s coverage.

“Since the 1960s, there have been many attempts to model the perceptual processes using computer algorithms, and the most influential figure of the last forty years has been David Marr, working at MIT. […] Marr and his colleagues were responsible for developing detailed algorithms for extracting (i) low-level information about the location of contours in the visual image, (ii) the motion of those contours, and (iii) the 3-D structure of objects in the world from binocular disparities and optic flow. In addition, one of his lasting achievements was to encourage researchers to be more rigorous in the way that perceptual tasks are described, analysed, and formulated and to use computer models to test the predictions of those models against human performance. […] Over the past fifteen years, many researchers in the field of perception have characterized perception as a Bayesian process […] According to Bayesian theory, what we perceive is a consequence of probabilistic processes that depend on the likelihood of certain events occurring in the particular world we live in. Moreover, most Bayesian models of perceptual processes assume that there is noise in the sensory signals and the amount of noise affects the reliability of those signals – the more noise, the less reliable the signal. Over the past fifteen years, Bayes theory has been used extensively to model the interaction between different discrepant cues, such as binocular disparity and texture gradients to specify the slant of an inclined surface.”
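
A standard way to make the Bayesian cue-combination idea concrete is inverse-variance weighting: if two cues give noisy, independent estimates of the same quantity (say, surface slant from binocular disparity and from texture gradients), the statistically optimal combined estimate weights each cue by its reliability (1/variance). The weighting formula is the standard Gaussian maximum-likelihood result used throughout this literature; the particular numbers below are made up for illustration.

```python
# Sketch: optimal (inverse-variance weighted) combination of two noisy cues.
# Hypothetical slant estimates in degrees; the sigmas express cue noise/reliability.
slant_disparity, sigma_disparity = 30.0, 2.0   # relatively reliable cue (assumed)
slant_texture,  sigma_texture  = 40.0, 6.0     # noisier cue (assumed)

w_d = 1 / sigma_disparity**2
w_t = 1 / sigma_texture**2
combined = (w_d * slant_disparity + w_t * slant_texture) / (w_d + w_t)
combined_sigma = (1 / (w_d + w_t)) ** 0.5

print(f"combined estimate ~ {combined:.1f} deg, sd ~ {combined_sigma:.2f} deg")
# The estimate sits much closer to the reliable cue, and its variability is lower
# than either cue alone - the signature prediction of these Bayesian models.
```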

“All surfaces have the property of reflectance — that is, the extent to which they reflect (rather than absorb) the incident illumination — and those reflectances can vary between 0 per cent and 100 per cent. Surfaces can also be selective in the particular wavelengths they reflect or absorb. Our colour vision depends on these selective reflectance properties […]. Reflectance characteristics describe the physical properties of surfaces. The lightness of a surface refers to a perceptual judgement of a surface’s reflectance characteristic — whether it appears as black or white or some grey level in between. Note that we are talking about the perception of lightness — rather than brightness — which refers to our estimate of how much light is coming from a particular surface or is emitted by a source of illumination. The perception of surface lightness is one of the most fundamental perceptual abilities because it allows us not only to differentiate one surface from another but also to identify the real-world properties of a particular surface. Many textbooks start with the observation that lightness perception is a difficult task because the amount of light reflected from a particular surface depends on both the reflectance characteristic of the surface and the intensity of the incident illumination. For example, a piece of black paper under high illumination will reflect back more light to the eye than a piece of white paper under dim illumination. As a consequence, lightness constancy — the ability to correctly judge the lightness of a surface under different illumination conditions — is often considered to be an ‘achievement’ of the perceptual system. […] The alternative starting point for understanding lightness perception is to ask whether there is something that remains constant or invariant in the patterns of light reaching the eye with changes of illumination. In this case, it is the relative amount of light reflected off different surfaces. Consider two surfaces that have different reflectances—two shades of grey. The actual amount of light reflected off each of the surfaces will vary with changes in the illumination but the relative amount of light reflected off the two surfaces remains the same. This shows that lightness perception is necessarily a spatial task and hence a task that cannot be solved by considering one particular surface alone. Note that the relative amount of light reflected off different surfaces does not tell us about the absolute lightnesses of different surfaces—only their relative lightnesses […] Can our perception of lightness be fooled? Yes, of course it can and the ways in which we make mistakes in our perception of the lightnesses of surfaces can tell us much about the characteristics of the underlying processes.”
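
The point about relative rather than absolute amounts of light is easy to check numerically: for a matte surface the luminance reaching the eye is roughly reflectance × illumination, so the ratio of luminances from two surfaces under the same illumination equals the ratio of their reflectances, whatever the illumination level. The toy numbers below are my own; they also reproduce the black-paper/white-paper example from the quote.

```python
# Sketch: luminance = reflectance * illumination (matte surfaces, toy numbers).
black, white = 0.05, 0.90          # assumed reflectances
bright, dim = 1000.0, 20.0         # assumed illumination levels (arbitrary units)

# Absolute luminance is ambiguous: black paper under bright light can send
# more light to the eye than white paper under dim light.
print(f"black@bright = {black * bright:.0f}, white@dim = {white * dim:.0f}")

# But within a single scene the luminance RATIO is invariant to the illumination:
for illum in (bright, dim):
    ratio = (white * illum) / (black * illum)
    print(f"illumination {illum:6.0f}: white/black luminance ratio = {ratio:.0f}")
# The ratio (18 here) depends only on the reflectances - the invariant that
# lightness perception can exploit.
```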

“From a survival point of view, the ability to differentiate objects and surfaces in the world by their ‘colours’ (spectral reflectance characteristics) can be extremely useful […] Most species of mammals, birds, fish, and insects possess several different types of receptor, each of which has a different spectral sensitivity function […] having two types of receptor with different spectral sensitivities is the minimum necessary for colour vision. This is referred to as dichromacy and the majority of mammals are dichromats with the exception of the old world monkeys and humans. […] The only difference between lightness and colour perception is that in the latter case we have to consider the way a surface selectively reflects (and absorbs) different wavelengths, rather than just a surface’s average reflectance over all wavelengths. […] The similarities between the tasks of extracting lightness and colour information mean that we can ask a similar question about colour perception [as we did about lightness perception] – what is the invariant information that could specify the reflectance characteristic of a surface? […] The information that is invariant under changes of spectral illumination is the relative amounts of long, medium, and short wavelength light reaching our eyes from different surfaces in the scene. […] the successful identification and discrimination of coloured surfaces is dependent on making spatial comparisons between the amounts of short, medium, and long wavelength light reaching our eyes from different surfaces. As with lightness perception, colour perception is necessarily a spatial task. It follows that if a scene is illuminated by the light of just a single wavelength, the appropriate spatial comparisons cannot be made. This can be demonstrated by illuminating a real-world scene containing many different coloured objects with yellow, sodium light that contains only a single wavelength. All objects, whatever their ‘colours’, will only reflect back to the eye different intensities of that sodium light and hence there will only be absolute but no relative differences between the short, medium, and long wavelength lightness records. There is a similar, but less dramatic, effect on our perception of colour when the spectral characteristics of the illumination are restricted to just a few wavelengths, as is the case with fluorescent lighting.”

“Consider a single receptor mechanism, such as a rod receptor in the human visual system, that responds to a limited range of wavelengths—referred to as the receptor’s spectral sensitivity function […]. This hypothetical receptor is more sensitive to some wavelengths (around 550 nm) than others and we might be tempted to think that a single type of receptor could provide information about the wavelength of the light reaching the receptor. This is not the case, however, because an increase or decrease in the response of that receptor could be due to either a change in the wavelength or an increase or decrease in the amount of light reaching the receptor. In other words, the output of a given receptor or receptor type perfectly confounds changes in wavelength with changes in intensity because it has only one way of responding — that is, more or less. This is Rushton’s Principle of Univariance — there is only one way of varying or one degree of freedom. […] On the other hand, if we consider a visual system with two different receptor types, one more sensitive to longer wavelengths (L) and the other more sensitive to shorter wavelengths (S), there are two degrees of freedom in the system and thus the possibility of signalling our two independent variables — wavelength and intensity […] it is quite possible to have a colour visual system that is based on just two receptor types. Such a colour visual system is referred to as dichromatic.”
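
The Principle of Univariance can be demonstrated with a toy receptor model: a receptor's response is just intensity × sensitivity(wavelength), so a single receptor type cannot tell a dim light at its preferred wavelength from a brighter light at a less preferred one, whereas the ratio of two receptor types' responses depends on wavelength but not on intensity. The Gaussian sensitivity curves below are my own toy assumption, not the actual human cone fundamentals.

```python
# Sketch: Rushton's Principle of Univariance with toy Gaussian receptors.
import math

def sensitivity(wavelength_nm, peak_nm, width_nm=60.0):
    return math.exp(-((wavelength_nm - peak_nm) / width_nm) ** 2)

def response(intensity, wavelength_nm, peak_nm):
    return intensity * sensitivity(wavelength_nm, peak_nm)

# One receptor type (peak 550 nm): two very different stimuli, identical response.
r1 = response(1.00, 550, 550)                            # light at the preferred wavelength
r2 = response(1.00 / sensitivity(610, 550), 610, 550)    # brighter light at 610 nm
print(f"single receptor: {r1:.3f} vs {r2:.3f}  -> indistinguishable")

# Two receptor types (toy L peak 560 nm, S peak 440 nm): the response RATIO
# depends on the wavelength but not on the intensity, so colour can be signalled.
for intensity in (1.0, 5.0):
    ratio = response(intensity, 520, 560) / response(intensity, 520, 440)
    print(f"intensity {intensity}: L/S response ratio = {ratio:.2f}")
```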

“So why is the human visual system trichromatic? The answer can be found in a phenomenon known as metamerism. So far, we have restricted our discussion to the effect of a single wavelength on our dichromatic visual system: for example, a single wavelength of around 550 nm that stimulated both the long and short receptor types about equally […]. But what would happen if we stimulated our dichromatic system with light of two different wavelengths at the same time — one long wavelength and one short wavelength? With a suitable choice of wavelengths, this combination of wavelengths would also have the effect of stimulating the two receptor types about equally […] As a consequence, the output of the system […] with this particular mixture of wavelengths would be indistinguishable from that created by the single wavelength of 550 nm. These two indistinguishable stimulus situations are referred to as metamers and a little thought shows that there would be many thousands of combinations of wavelength that produce the same activity […] in a dichromatic visual system. As a consequence, all these different combinations of wavelengths would be indistinguishable to a dichromatic observer, even though they were produced by very different combinations of wavelengths. […] Is there any way of avoiding the problem of metamerism? The answer is no but we can make things better. If a visual system had three receptor types rather than two, then many of the combinations of wavelengths that produce an identical pattern of activity in two of the mechanisms (L and S) would create a different amount of activity in our third receptor type (M) that is maximally sensitive to medium wavelengths. Hence the number of indistinguishable metameric matches would be significantly reduced but they would never be eliminated. Using the same logic, it follows that a further increase in the number of receptor types (beyond three) would reduce the problem of metamerism even more […]. There would, however, also be a cost. Having more distinct receptor types in a finite-sized retina would increase the average spacing between the receptors of the same type and thus make our acuity for fine detail significantly poorer. There are many species, such as dragonflies, with more than three receptor types in their eyes but the larger number of receptor types typically serves to increase the range of wavelengths to which the animal is sensitive into the infra-red or ultra-violet parts of the spectrum, rather than to reduce the number of metamers. […] the sensitivity of the short wavelength receptors in the human eye only extends to ~540 nm — the S receptors are insensitive to longer wavelengths. This means that human colour vision is effectively dichromatic for combinations of wavelengths above 540 nm. In addition, there are no short wavelength cones in the central fovea of the human retina, which means that we are also dichromatic in the central part of our visual field. The fact that we are unaware of this lack of colour vision is probably due to the fact that our eyes are constantly moving. […] It is […] important to appreciate that the description of the human colour visual system as trichromatic is not a description of the number of different receptor types in the retina – it is a property of the whole visual system.”
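
Metamers can also be constructed explicitly in such a toy model: with only two receptor types there are two equations (the L and S responses) and two unknowns (the intensities of the two wavelengths in the mixture), so a mixture can be found that exactly matches the responses produced by a single 550 nm light; a third, M-type receptor then breaks the match. The code below is a toy demonstration under assumed Gaussian sensitivities, not a model of real cones.

```python
# Sketch: constructing a metamer for a two-receptor (dichromatic) system.
import math
import numpy as np

def sens(wl, peak, width=60.0):
    return math.exp(-((wl - peak) / width) ** 2)

L_PEAK, S_PEAK, M_PEAK = 560, 440, 530      # toy receptor peaks (assumed)
target_wl = 550                             # single-wavelength reference light
mix_wls = (470, 610)                        # two wavelengths used in the mixture

# Solve for mixture intensities giving the same L and S responses as the target.
A = np.array([[sens(w, L_PEAK) for w in mix_wls],
              [sens(w, S_PEAK) for w in mix_wls]])
b = np.array([sens(target_wl, L_PEAK), sens(target_wl, S_PEAK)])
intensities = np.linalg.solve(A, b)
print("mixture intensities:", intensities)

# L and S cannot tell the two stimuli apart, but a third (M) receptor can:
for peak, name in ((L_PEAK, "L"), (S_PEAK, "S"), (M_PEAK, "M")):
    single = sens(target_wl, peak)
    mixture = sum(i * sens(w, peak) for i, w in zip(intensities, mix_wls))
    print(f"{name}: single = {single:.3f}, mixture = {mixture:.3f}")
```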

“Recent research has shown that although the majority of humans are trichromatic there can be significant differences in the precise matches that individuals make when matching colour patches […] the absence of one receptor type will result in a greater number of colour confusions than normal and this does have a significant effect on an observer’s colour vision. Protanopia is the absence of long wavelength receptors, deuteranopia the absence of medium wavelength receptors, and tritanopia the absence of short wavelength receptors. These three conditions are often described as ‘colour blindness’ but this is a misnomer. We are all colour blind to some extent because we all suffer from colour metamerism and fail to make discriminations that would be very apparent to any biological or machine vision system with a greater number of receptor types. For example, most stomatopod crustaceans (mantis shrimps) have twelve different visual pigments and they also have the ability to detect both linear and circularly polarized light. What I find interesting is that we believe, as trichromats, that we have the ability to discriminate all the possible shades of colour (reflectance characteristics) that exist in our world. […] we are typically unaware of the limitations of our visual systems because we have no way of comparing what we see normally with what would be seen by a ‘better’ visual system.”

“We take it for granted that we are able to segregate the visual input into separate objects and distinguish objects from their backgrounds and we rarely make mistakes except under impoverished conditions. How is this possible? In many cases, the boundaries of objects are defined by changes of luminance and colour and these changes allow us to separate or segregate an object from its background. But luminance and colour changes are also present in the textured surfaces of many objects and therefore we need to ask how it is that our visual system does not mistake these luminance and colour changes for the boundaries of objects. One answer is that object boundaries have special characteristics. In our world, most objects and surfaces are opaque and hence they occlude (cover) the surface of the background. As a consequence, the contours of the background surface typically end—they are ‘terminated’—at the boundary of the occluding object or surface. Quite often, the occluded contours of the background are also revealed at the opposite side of the occluding surface because they are physically continuous. […] The impression of occlusion is enhanced if the occluded contours contain a range of different lengths, widths, and orientations. In the natural world, many animals use colour and texture to camouflage their boundaries as well as to fool potential predators about their identity. […] There is an additional source of information — relative motion — that can be used to segregate a visual scene into objects and their backgrounds and to break any camouflage that might exist in a static view. A moving, opaque object will progressively occlude and dis-occlude (reveal) the background surface so that even a well-camouflaged, moving animal will give away its location. Hence it is not surprising that a very common and successful strategy of many animals is to freeze in order not to be seen. Unless the predator has a sophisticated visual system to break the pattern or colour camouflage, the prey will remain invisible.”

Some links:

Perception.
Ames room. Inverse problem in optics.
Hermann von Helmholtz. Richard Gregory. Irvin Rock. James Gibson. David Marr. Ewald Hering.
Optical flow.
La dioptrique.
Necker cube. Rubin’s vase.
Perceptual constancy. Texture gradient.
Ambient optic array.
Affordance.
Luminance.
Checker shadow illusion.
Shape from shading/Photometric stereo.
Colour vision. Colour constancy. Retinex model.
Cognitive neuroscience of visual object recognition.
Motion perception.
Horace Barlow. Bernhard Hassenstein. Werner E. Reichardt. Sigmund Exner. Jan Evangelista Purkyně.
Phi phenomenon.
Motion aftereffect.
Induced motion.

October 14, 2018 Posted by | Biology, Books, Ophthalmology, Physics, Psychology | Leave a comment

Supermassive BHs Mergers

This is the first post I’ve posted in a while; as mentioned earlier, the blogging hiatus was due to internet connectivity issues secondary to my moving. Those issues should now have been resolved, and I hope to soon get back to blogging regularly.

Some links related to the lecture’s coverage:

Supermassive black hole.
Binary black hole. Final parsec problem.
LIGO (Laser Interferometer Gravitational-Wave Observatory). Laser Interferometer Space Antenna (LISA).
Dynamical friction.
Science with the space-based interferometer eLISA: Supermassive black hole binaries (Klein et al., 2016).
Off the Beaten Path: A New Approach to Realistically Model The Orbital Decay of Supermassive Black Holes in Galaxy Formation Simulations (Tremmel et al., 2015).
Dancing to ChaNGa: A Self-Consistent Prediction For Close SMBH Pair Formation Timescales Following Galaxy Mergers (Tremmel et al., 2017).
Growth and activity of black holes in galaxy mergers with varying mass ratios (Capelo et al., 2015).
Tidal heating. Tidal stripping.
Nuclear coups: dynamics of black holes in galaxy mergers (Wassenhove et al., 2013).
The birth of a supermassive black hole binary (Pfister et al., 2017).
Massive black holes and gravitational waves (I assume this is the lecturer’s own notes for a similar talk held at another point in time – there’s a lot of overlap between these notes and stuff covered in the lecture, so if you’re curious you could go have a look. As far as I could see all figures in the second half of the link, as well as a few of the earlier ones, are figures which were also included in this lecture).

September 18, 2018 Posted by | Astronomy, Lectures, Physics, Studies | Leave a comment

Lyapunov Arguments in Optimization

I’d say that if you’re interested in the intersection of mathematical optimization methods/algorithms and dynamical systems analysis, it’s probably a talk well worth watching. The lecture is reasonably high-level and covers a fairly satisfactory amount of ground in a relatively short amount of time, and it is not particularly hard to follow if you have at least some passing familiarity with the fields involved (dynamical systems analysis, statistics, mathematical optimization, computer science/machine learning).

Some links:

Dynamical system.
Euler–Lagrange equation.
Continuous optimization problem.
Gradient descent algorithm.
Lyapunov stability.
Condition number.
Fast (/accelerated-) gradient descent methods.
The Mirror Descent Algorithm.
Cubic regularization of Newton method and its global performance (Nesterov & Polyak).
A Differential Equation for Modeling Nesterov’s Accelerated Gradient Method: Theory and Insights (Su, Boyd & Candès).
A Variational Perspective on Accelerated Methods in Optimization (Wibisono, Wilson & Jordan).
Breaking Locality Accelerates Block Gauss-Seidel (Tu, Venkataraman, Wilson, Gittens, Jordan & Recht).
A Lyapunov Analysis of Momentum Methods in Optimization (Wilson, Recht & Jordan).
Bregman divergence.
Estimate sequence methods.
Variance reduction techniques.
Stochastic gradient descent.
Langevin dynamics.

July 22, 2018 Posted by | Computer science, Lectures, Mathematics, Physics, Statistics | Leave a comment

Oceans (II)

In this post I have added some more observations from the book and some more links related to the book‘s coverage.

“Almost all the surface waves we observe are generated by wind stress, acting either locally or far out to sea. Although the wave crests appear to move forwards with the wind, this does not occur. Mechanical energy, created by the original disturbance that caused the wave, travels through the ocean at the speed of the wave, whereas water does not. Individual molecules of water simply move back and forth, up and down, in a generally circular motion. […] The greater the wind force, the bigger the wave, the more energy stored within its bulk, and the more energy released when it eventually breaks. The amount of energy is enormous. Over long periods of time, whole coastlines retreat before the pounding waves – cliffs topple, rocks are worn to pebbles, pebbles to sand, and so on. Individual storm waves can exert instantaneous pressures of up to 30,000 kilograms […] per square metre. […] The rate at which energy is transferred across the ocean is the same as the velocity of the wave. […] waves typically travel at speeds of 30-40 kilometres per hour, and […] waves with a greater wavelength will travel faster than those with a shorter wavelength. […] With increasing wind speed and duration over which the wind blows, the wave height, period, and length all increase. The distance over which the wind blows is known as fetch, and is critical in influencing the growth of waves — the greater the area of ocean over which a storm blows, then the larger and more powerful the waves generated. The three stages in wave development are known as sea, swell, and surf. […] The ocean is highly efficient at transmitting energy. Water offers so little resistance to the small orbital motion of water particles in waves that individual wave trains may continue for thousands of kilometres. […] When the wave train encounters shallow water — say 50 metres for a 100-metre wavelength — the waves first feel the bottom and begin to slow down in response to frictional resistance. Wavelength decreases, the crests bunch closer together, and wave height increases until the wave becomes unstable and topples forwards as surf. […] Very often, waves approach obliquely to the coast and set up a significant transfer of water and sediment along the shoreline. The long-shore currents so developed can be very powerful, removing beach sand and building out spits and bars across the mouths of estuaries.” (People who’re interested in knowing more about these topics will probably enjoy Fredric Raichlen’s book on these topics – I did, US.)
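
The statements about wave speed and wavelength come from linear wave theory: in deep water the phase speed is c = sqrt(gλ/2π), so longer waves travel faster, while once the water becomes much shallower than the wavelength the speed collapses to c = sqrt(gd) and the wave slows as it approaches the shore. The sketch below simply evaluates these two standard limits; the particular wavelengths and depths are my own choices.

```python
# Sketch: deep-water vs shallow-water wave speed limits from linear wave theory.
import math

g = 9.81

def deep_water_speed(wavelength_m):     # c = sqrt(g * L / (2*pi)), valid when depth >> wavelength
    return math.sqrt(g * wavelength_m / (2 * math.pi))

def shallow_water_speed(depth_m):       # c = sqrt(g * d), valid when depth << wavelength
    return math.sqrt(g * depth_m)

for wavelength in (50, 100, 200):       # longer waves travel faster in the open ocean
    print(f"wavelength {wavelength:3d} m: deep-water speed ~ {deep_water_speed(wavelength) * 3.6:.0f} km/h")

for depth in (10, 5, 2):                # and slow down dramatically as the water shoals
    print(f"depth {depth:2d} m: shallow-water speed ~ {shallow_water_speed(depth) * 3.6:.0f} km/h")
```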

“Wind is the principal force that drives surface currents, but the pattern of circulation results from a more complex interaction of wind drag, pressure gradients, and Coriolis deflection. Wind drag is a very inefficient process by which the momentum of moving air molecules is transmitted to water molecules at the ocean surface setting them in motion. The speed of water molecules (the current), initially in the direction of the wind, is only about 3–4 per cent of the wind speed. This means that a wind blowing constantly over a period of time at 50 kilometres per hour will produce a water current of about 1 knot (2 kilometres per hour). […] Although the movement of wind may seem random, changing from one day to the next, surface winds actually blow in a very regular pattern on a planetary scale. The subtropics are known for the trade winds with their strong easterly component, and the mid-latitudes for persistent westerlies. Wind drag by such large-scale wind systems sets the ocean waters in motion. The trade winds produce a pair of equatorial currents moving to the west in each ocean, while the westerlies drive a belt of currents that flow to the east at mid-latitudes in both hemispheres. […] Deflection by the Coriolis force and ultimately by the position of the continents creates very large oval-shaped gyres in each ocean.”

“The control exerted by the oceans is an integral and essential part of the global climate system. […] The oceans are one of the principal long-term stores on Earth for carbon and carbon dioxide […] The oceans are like a gigantic sponge holding fifty times more carbon dioxide than the atmosphere […] the sea surface acts as a two-way control valve for gas transfer, which opens and closes in response to two key properties – gas concentration and ocean stirring. First, the difference in gas concentration between the air and sea controls the direction and rate of gas exchange. Gas concentration in water depends on temperature—cold water dissolves more carbon dioxide than warm water, and on biological processes—such as photosynthesis and respiration by microscopic plants, animals, and bacteria that make up the plankton. These transfer processes affect all gases […]. Second, the strength of the ocean-stirring process, caused by wind and foaming waves, affects the ease with which gases are absorbed at the surface. More gas is absorbed during stormy weather and, once dissolved, is quickly mixed downwards by water turbulence. […] The transfer of heat, moisture, and other gases between the ocean and atmosphere drives small-scale oscillations in climate. The El Niño Southern Oscillation (ENSO) is the best known, causing 3–7-year climate cycles driven by the interaction of sea-surface temperature and trade winds along the equatorial Pacific. The effects are worldwide in their impact through a process of atmospheric teleconnection — causing floods in Europe and North America, monsoon failure and severe drought in India, South East Asia, and Australia, as well as decimation of the anchovy fishing industry off Peru.”

“Earth’s climate has not always been as it is today […] About 100 million years ago, for example, palm trees and crocodiles lived as far north as 80°N – the equivalent of Arctic Canada or northern Greenland today. […] Most of the geological past has enjoyed warm conditions. These have been interrupted at irregular intervals by cold and glacial climates of altogether shorter duration […][,] the last [of them] beginning around 3 million years ago. We are still in the grip of this last icehouse state, although in one of its relatively brief interglacial phases. […] Sea level has varied in the past in close consort with climate change […]. Around twenty-five thousand years ago, at the height of the last Ice Age, the global sea level was 120 metres lower than today. Huge tracts of the continental shelves that rim today’s landmasses were exposed. […] Further back in time, 80 million years ago, the sea level was around 250–350 metres higher than today, so that 82 per cent of the planet was ocean and only 18 per cent remained as dry land. Such changes have been the norm throughout geological history and entirely the result of natural causes.”

“Most of the solar energy absorbed by seawater is converted directly to heat, and water temperature is vital for the distribution and activity of life in the oceans. Whereas mean temperature ranges from 0 to 40 degrees Celsius, 90 per cent of the oceans are permanently below 5°C. Most marine animals are ectotherms (cold-blooded), which means that they obtain their body heat from their surroundings. They generally have narrow tolerance limits and are restricted to particular latitudinal belts or water depths. Marine mammals and birds are endotherms (warm-blooded), which means that their metabolism generates heat internally thereby allowing the organism to maintain constant body temperature. They can tolerate a much wider range of external conditions. Coping with the extreme (hydrostatic) pressure exerted at depth within the ocean is a challenge. For every 30 metres of water, the pressure increases by 3 atmospheres – roughly equivalent to the weight of an elephant.”
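
The '30 metres of water ≈ 3 atmospheres' rule of thumb is simply hydrostatic pressure, P = ρgh, and the same formula gives the crushing pressures of the deep ocean. A quick sketch (standard physics; the depths and density are my own choices):

```python
# Sketch: hydrostatic pressure P = rho * g * h, expressed in atmospheres.
rho = 1025.0        # seawater density (kg/m^3), assumed
g = 9.81
atm = 101325.0      # pascals per atmosphere

for depth in (30, 1000, 4000, 11000):   # shelf, bathyal, abyssal, and hadal depths
    pressure_atm = rho * g * depth / atm
    print(f"{depth:5d} m: ~{pressure_atm:6.0f} atm above surface pressure")
# ~3 atm at 30 m, ~100 atm at 1 km, ~400 atm on the abyssal plain,
# and over 1,000 atm in the deepest trenches.
```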

“There are at least 6000 different species of diatom. […] An average litre of surface water from the ocean contains over half a million diatoms and other unicellular phytoplankton and many thousands of zooplankton.”

“Several different styles of movement are used by marine organisms. These include floating, swimming, jet propulsion, creeping, crawling, and burrowing. […] The particular physical properties of water that most affect movement are density, viscosity, and buoyancy. Seawater is about 800 times denser than air and nearly 100 times more viscous. Consequently there is much more resistance on movement than on land […] Most large marine animals, including all fishes and mammals, have adopted some form of active swimming […]. Swimming efficiency in fishes has been achieved by minimizing the three types of drag resistance created by friction, turbulence, and body form. To reduce surface friction, the body must be smooth and rounded like a sphere. The scales of most fish are also covered with slime as further lubrication. To reduce form drag, the cross-sectional area of the body should be minimal — a pencil shape is ideal. To reduce the turbulent drag as water flows around the moving body, a rounded front end and tapered rear is required. […] Fins play a versatile role in the movement of a fish. There are several types including dorsal fins along the back, caudal or tail fins, and anal fins on the belly just behind the anus. Operating together, the beating fins provide stability and steering, forwards and reverse propulsion, and braking. They also help determine whether the motion is up or down, forwards or backwards.”

Links:

Rip current.
Rogue wave. Agulhas Current. Kuroshio Current.
Tsunami.
Tide. Tidal range.
Geostrophic current.
Ekman Spiral. Ekman transport. Upwelling.
Global thermohaline circulation system. Antarctic bottom water. North Atlantic Deep Water.
Rio Grande Rise.
Denmark Strait. Denmark Strait cataract (/waterfall?).
Atmospheric circulation. Jet streams.
Monsoon.
Cyclone. Tropical cyclone.
Ozone layer. Ozone depletion.
Milankovitch cycles.
Little Ice Age.
Oxygen Isotope Stratigraphy of the Oceans.
Contourite.
Earliest known life forms. Cyanobacteria. Prokaryote. Eukaryote. Multicellular organism. Microbial mat. Ediacaran. Cambrian explosion. Pikaia. Vertebrate. Major extinction events. Permian–Triassic extinction event. (The author seems to disagree with the authors of this article about potential causes, in particular in so far as they relate to the formation of Pangaea – as I felt uncertain about the accuracy of the claims made in the book I decided against covering this topic in this post, even though I find it interesting).
Tethys Ocean.
Plesiosauria. Pliosauroidea. Ichthyosaur. Ammonoidea. Belemnites. Pachyaena. Cetacea.
Pelagic zone. Nekton. Benthic zone. Neritic zone. Oceanic zone. Bathyal zone. Hadal zone.
Phytoplankton. Silicoflagellates. Coccolithophore. Dinoflagellate. Zooplankton. Protozoa. Tintinnid. Radiolaria. Copepods. Krill. Bivalves.
Elasmobranchii.
Ampullae of Lorenzini. Lateral line.
Baleen whale. Humpback whale.
Coral reef.
Box jellyfish. Stonefish.
Horseshoe crab.
Greenland shark. Giant squid.
Hydrothermal vent. Pompeii worms.
Atlantis II Deep. Aragonite. Phosphorite. Deep sea mining. Oil platform. Methane clathrate.
Ocean thermal energy conversion. Tidal barrage.
Mariculture.
Exxon Valdez oil spill.
Bottom trawling.

June 24, 2018 Posted by | Biology, Books, Engineering, Geology, Paleontology, Physics | Leave a comment

Oceans (I)

I read this book quite some time ago, but back when I did I never blogged it; instead I just added a brief review on goodreads. I remember that the main reason why I decided against blogging it shortly after I’d read it was that the coverage overlapped a great deal with Mladenov’s marine biology text, which I had at that time just read and actually did blog in some detail. I figured if I wanted to blog this book as well I would be well-advised to wait a while, so that I’d at least have forgotten some of the stuff first – that way blogging the book might end up serving as a review of stuff I’d forgotten, rather than as a review of stuff that would still be fresh in my memory and so wouldn’t really be worth reviewing anyway. So now here we are a few months later, and I have come to think it might be a good idea to blog the book.

Below I have added some quotes from the first half of the book and some links to topics/people/etc. covered.

“Several methods now exist for calculating the rate of plate motion. Most reliable for present-day plate movement are direct observations made using satellites and laser technology. These show that the Atlantic Ocean is growing wider at a rate of between 2 and 4 centimetres per year (about the rate at which fingernails grow), the Indian Ocean is attempting to grow at a similar rate but is being severely hampered by surrounding plate collisions, while the fastest spreading centre is the East Pacific Rise along which ocean crust is being created at rates of around 17 centimetres per year (the rate at which hair grows). […] The Nazca plate has been plunging beneath South America for at least 200 million years – the imposing Andes, the longest mountain chain on Earth, is the result. […] By around 120 million years ago, South America and Africa began to drift apart and the South Atlantic was born. […] sea levels rose higher than at any time during the past billion years, perhaps as much as 350 metres higher than today. Only 18 per cent of the globe was dry land — 82 per cent was under water. These excessively high sea levels were the result of increased spreading activity — new oceans, new ridges, and faster spreading rates all meant that the mid-ocean ridge systems collectively displaced a greater volume of water than ever before. Global warming was far more extreme than today. Temperatures in the ocean rose to around 30°C at the equator and as much as 14°C at the poles. Ocean circulation was very sluggish.”
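
A rough consistency check (my own arithmetic, not the book's) ties the centimetres-per-year spreading rates to the 120-million-year age quoted for the South Atlantic: a constant full spreading rate of a few centimetres per year, sustained for that long, produces an ocean basin a few thousand kilometres wide, which is about right.

```python
# Sketch: ocean width implied by a constant spreading rate (rough check).
spreading_rate_cm_per_yr = 3.0      # assumed full rate, within the 2-4 cm/yr range quoted
age_myr = 120                       # approximate age of the South Atlantic opening

width_km = spreading_rate_cm_per_yr * 1e-5 * age_myr * 1e6   # cm/yr -> km accumulated over Myr
print(f"implied basin width: ~{width_km:.0f} km")
# ~3,600 km - the right order of magnitude for today's South Atlantic.
```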

“The land–ocean boundary is known as the shoreline. Seaward of this, all continents are surrounded by a broad, flat continental shelf, typically 10–100 kilometres wide, which slopes very gently (less than one-tenth of a degree) to the shelf edge at a water depth of around 100 metres. Beyond this the continental slope plunges to the deep-ocean floor. The slope is from tens to a few hundred kilometres wide and with a mostly gentle gradient of 3–8 degrees, but locally steeper where it is affected by faulting. The base of slope abuts the abyssal plain — flat, almost featureless expanses between 4 and 6 kilometres deep. The oceans are compartmentalized into abyssal basins separated by submarine mountain ranges and plateaus, which are the result of submarine volcanic outpourings. Those parts of the Earth that are formed of ocean crust are relatively lower, because they are made up of denser rocks — basalts. Those formed of less dense rocks (granites) of the continental crust are relatively higher. Seawater fills in the deeper parts, the ocean basins, to an average depth of around 4 kilometres. In fact, some parts are shallower because the ocean crust is new and still warm — these are the mid-ocean ridges at around 2.5 kilometres — whereas older, cooler crust drags the seafloor down to a depth of over 6 kilometres. […] The seafloor is almost entirely covered with sediment. In places, such as on the flanks of mid-ocean ridges, it is no more than a thin veneer. Elsewhere, along stable continental margins or beneath major deltas where deposition has persisted for millions of years, the accumulated thickness can exceed 15 kilometres. These areas are known as sedimentary basins“.

“The super-efficiency of water as a solvent is due to an asymmetrical bonding between hydrogen and oxygen atoms. The resultant water molecule has an angular or kinked shape with weakly charged positive and negative ends, rather like magnetic poles. This polar structure is especially significant when water comes into contact with substances whose elements are held together by the attraction of opposite electrical charges. Such ionic bonding is typical of many salts, such as sodium chloride (common salt) in which a positive sodium ion is attracted to a negative chloride ion. Water molecules infiltrate the solid compound, the positive hydrogen end being attracted to the chloride and the negative oxygen end to the sodium, surrounding and then isolating the individual ions, thereby disaggregating the solid [I should mention that if you’re interested in knowing (much) more this topic, and closely related topics, this book covers these things in great detail – US]. An apparently simple process, but extremely effective. […] Water is a super-solvent, absorbing gases from the atmosphere and extracting salts from the land. About 3 billion tonnes of dissolved chemicals are delivered by rivers to the oceans each year, yet their concentration in seawater has remained much the same for at least several hundreds of millions of years. Some elements remain in seawater for 100 million years, others for only a few hundred, but all are eventually cycled through the rocks. The oceans act as a chemical filter and buffer for planet Earth, control the distribution of temperature, and moderate climate. Inestimable numbers of calories of heat energy are transferred every second from the equator to the poles in ocean currents. But, the ocean configuration also insulates Antarctica and allows the build-up of over 4000 metres of ice and snow above the South Pole. […] Over many aeons, the oceans slowly accumulated dissolved chemical ions (and complex ions) of almost every element present in the crust and atmosphere. Outgassing from the mantle from volcanoes and vents along the mid-ocean ridges contributed a variety of other elements […] The composition of the first seas was mostly one of freshwater together with some dissolved gases. Today, however, the world ocean contains over 5 trillion tonnes of dissolved salts, and nearly 100 different chemical elements […] If the oceans’ water evaporated completely, the dried residue of salts would be equivalent to a 45-metre-thick layer over the entire planet.”

“The average time a single molecule of water remains in any one reservoir varies enormously. It may survive only one night as dew, up to a week in the atmosphere or as part of an organism, two weeks in rivers, and up to a year or more in soils and wetlands. Residence times in the oceans are generally over 4000 years, and water may remain in ice caps for tens of thousands of years. Although the ocean appears to be in a steady state, in which both the relative proportion and amounts of dissolved elements per unit volume are nearly constant, this is achieved by a process of chemical cycles and sinks. The input of elements from mantle outgassing and continental runoff must be exactly balanced by their removal from the oceans into temporary or permanent sinks. The principal sink is the sediment and the principal agent removing ions from solution is biological. […] The residence times of different elements vary enormously from tens of millions of years for chloride and sodium, to a few hundred years only for manganese, aluminium, and iron. […] individual water molecules have cycled through the atmosphere (or mantle) and returned to the seas more than a million times since the world ocean formed.”

“Because of its polar structure and hydrogen bonding between individual molecules, water has both a high capacity for storing large amounts of heat and one of the highest specific heat values of all known substances. This means that water can absorb (or release) large amounts of heat energy while changing relatively little in temperature. Beach sand, by contrast, has a specific heat five times lower than water, which explains why, on sunny days, beaches soon become too hot to stand on with bare feet while the sea remains pleasantly cool. Solar radiation is the dominant source of heat energy for the ocean and for the Earth as a whole. The differential in solar input with latitude is the main driver for atmospheric winds and ocean currents. Both winds and especially currents are the prime means of mitigating the polar–tropical heat imbalance, so that the polar oceans do not freeze solid, nor the equatorial oceans gently simmer. For example, the Gulf Stream transports some 550 trillion calories from the Caribbean Sea across the North Atlantic each second, and so moderates the climate of north-western Europe.”
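
Two small calculations help put these numbers in perspective (both mine, using standard constants): the specific heat of water (~4,186 J per kg per °C) really is roughly five times that of dry sand (~800 J per kg per °C is a commonly assumed value), and 550 trillion calories per second converts to a heat transport of a couple of petawatts.

```python
# Sketch: specific heat comparison and conversion of the Gulf Stream figure.
c_water = 4186.0    # J/(kg*K)
c_sand = 800.0      # J/(kg*K), rough assumed value for dry sand
print(f"water stores ~{c_water / c_sand:.1f}x more heat per kg per degree than sand")

calories_per_second = 550e12          # figure quoted in the text
watts = calories_per_second * 4.186   # 1 calorie = 4.186 J
print(f"heat transport ~ {watts:.1e} W (~{watts / 1e15:.1f} petawatts)")
```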

“[W]hy is [the sea] mostly blue? The sunlight incident on the sea has a full spectrum of wavelengths, including the rainbow of colours that make up the visible spectrum […] The longer wavelengths (red) and very short (ultraviolet) are preferentially absorbed by water, rapidly leaving near-monochromatic blue light to penetrate furthest before it too is absorbed. The dominant hue that is backscattered, therefore, is blue. In coastal waters, suspended sediment and dissolved organic debris absorb additional short wavelengths (blue) resulting in a greener hue. […] The speed of sound in seawater is about 1500 metres per second, almost five times that in air. It is even faster where the water is denser, warmer, or more salty and shows a slow but steady increase with depth (related to increasing water pressure).”

“From top to bottom, the ocean is organized into layers, in which the physical and chemical properties of the ocean – salinity, temperature, density, and light penetration – show strong vertical segregation. […] Almost all properties of the ocean vary in some way with depth. Light penetration is attenuated by absorption and scattering, giving an upper photic and lower aphotic zone, with a more or less well-defined twilight region in between. Absorption of incoming solar energy also preferentially heats the surface waters, although with marked variations between latitudes and seasons. This results in a warm surface layer, a transition layer (the thermocline) through which the temperature decreases rapidly with depth, and a cold deep homogeneous zone reaching to the ocean floor. Exactly the same broad three-fold layering is true for salinity, except that salinity increases with depth — through the halocline. The density of seawater is controlled by its temperature, salinity, and pressure, such that colder, saltier, and deeper waters are all more dense. A rapid density change, known as the pycnocline, is therefore found at approximately the same depth as the thermocline and halocline. This varies from about 10 to 500 metres, and is often completely absent at the highest latitudes. Winds and waves thoroughly stir and mix the upper layers of the ocean, even destroying the layered structure during major storms, but barely touch the more stable, deep waters.”

Links:

Arvid Pardo. Law of the Sea Convention.
Polynesians.
Ocean exploration timeline (a different timeline is presented in the book, but there’s some overlap). Age of Discovery. Vasco da Gama. Christopher Columbus. John Cabot. Amerigo Vespucci. Ferdinand Magellan. Luigi Marsigli. James Cook.
HMS Beagle. HMS Challenger. Challenger expedition.
Deep Sea Drilling Project. Integrated Ocean Drilling Program. JOIDES Resolution.
World Ocean.
Geological history of Earth (this article of course covers much more than is covered in the book, but the book does cover some highlights). Plate tectonics. Lithosphere. Asthenosphere. Convection. Global mid-ocean ridge system.
Pillow lava. Hydrothermal vent. Hot spring.
Ophiolite.
Mohorovičić discontinuity.
Mid-Atlantic Ridge. Subduction zone. Ring of Fire.
Pluton. Nappe. Mélange. Transform fault. Strike-slip fault. San Andreas fault.
Paleoceanography. Tethys Ocean. Laurasia. Gondwana.
Oceanic anoxic event. Black shale.
Seabed.
Bengal Fan.
Fracture zone.
Seamount.
Terrigenous sediment. Biogenic and chemogenic sediment. Halite. Gypsum.
Carbonate compensation depth.
Laurentian fan.
Deep-water sediment waves. Submarine landslide. Turbidity current.
Water cycle.
Ocean acidification.
Timing and Climatic Consequences of the Opening of Drake Passage. The Opening of the Tasmanian Gateway Drove Global Cenozoic Paleoclimatic and Paleoceanographic Changes (report). Antarctic Circumpolar Current.
SOFAR channel.
Bathymetry.

June 18, 2018 Posted by | Books, Chemistry, Geology, Papers, Physics

Structural engineering

“The purpose of the book is three-fold. First, I aim to help the general reader appreciate the nature of structure, the role of the structural engineer in man-made structures, and understand better the relationship between architecture and engineering. Second, I provide an overview of how structures work: how they stand up to the various demands made of them. Third, I give students and prospective students in engineering, architecture, and science access to perspectives and qualitative understanding of advanced modern structures — going well beyond the simple statics of most introductory texts. […] Structural engineering is an important part of almost all undergraduate courses in engineering. This book is novel in the use of ‘thought-experiments’ as a straightforward way of explaining some of the important concepts that students often find the most difficult. These include virtual work, strain energy, and maximum and minimum energy principles, all of which are basic to modern computational techniques. The focus is on gaining understanding without the distraction of mathematical detail. The book is therefore particularly relevant for students of civil, mechanical, aeronautical, and aerospace engineering but, of course, it does not cover all of the theoretical detail necessary for completing such courses.”

The above quote is from the book’s preface. I gave the book 2 stars on goodreads, and I must say that I think David Muir Wood’s book in this series on a similar and closely overlapping topic, civil engineering, was simply a significantly better book – if you’re planning on reading only one book on these topics, in my opinion you should pick Wood’s book. I have two main complaints about this book: there’s too much material on the aesthetic properties of structures and on the history and development of the differences between architecture and engineering; and the author seems to think it’s no problem to cover quite complicated topics with just analogies and thought experiments, without showing you any of the equations. As for the first point, I don’t really have any interest in aesthetics or architectural history; as for the second, I can handle math reasonably well, but I usually have trouble when people insist on hiding the equations from me and talking only ‘in images’. The absence of equations doesn’t mean the topic coverage is dumbed down much; rather, the author is trying to cover the sort of material we usually use mathematics to talk about, because that is the most efficient language for it, using other kinds of language, and things get lost in the translation. He got rid of the math, but not the complexity. The book does include many illustrations, including illustrations of some quite complicated topics and dynamics, but some of the things he talks about can’t be illustrated well with images because you ‘run out of dimensions’ before you’ve handled all the relevant aspects and dynamics, an admission he himself makes in the book.

Anyway, the book is not terrible and there’s some interesting stuff in there. I’ve added a few more quotes and some links related to the book’s coverage below.

“All structures span a gap or a space of some kind and their primary role is to transmit the imposed forces safely. A bridge spans an obstruction like a road or a river. The roof truss of a house spans the rooms of the house. The fuselage of a jumbo jet spans between wheels of its undercarriage on the tarmac of an airport terminal and the self-weight, lift and drag forces in flight. The hull of a ship spans between the variable buoyancy forces caused by the waves of the sea. To be fit for purpose every structure has to cope with specific local conditions and perform inside acceptable boundaries of behaviour—which engineers call ‘limit states’. […] Safety is paramount in two ways. First, the risk of a structure totally losing its structural integrity must be very low—for example a building must not collapse or a ship break up. This maximum level of performance is called an ultimate limit state. If a structure should reach that state for whatever reason then the structural engineer tries to ensure that the collapse or break up is not sudden—that there is some degree of warning—but this is not always possible […] Second, structures must be able to do what they were built for—this is called serviceability or performance limit state. So for example a skyscraper building must not sway so much that it causes discomfort to the occupants, even if the risk of total collapse is still very small.”

“At its simplest force is a pull (tension) or a push (compression). […] There are three ways in which materials are strong in different combinations—pulling (tension), pushing (compression), and sliding (shear). Each is very important […] all intact structures have internal forces that balance the external forces acting on them. These external forces come from simple self-weight, people standing, sitting, walking, travelling across them in cars, trucks, and trains, and from the environment such as wind, water, and earthquakes. In that state of equilibrium it turns out that structures are naturally lazy—the energy stored in them is a minimum for that shape or form of structure. Form-finding structures are a special group of buildings that are allowed to find their own shape—subject to certain constraints. There are two classes—in the first, the form-finding process occurs in a model (which may be physical or theoretical) and the structure is scaled up from the model. In the second, the structure is actually built and then allowed to settle into shape. In both cases the structures are self-adjusting in that they move to a position in which the internal forces are in equilibrium and contain minimum energy. […] there is a big problem in using self-adjusting structures in practice. The movements under changing loads can make the structures unfit for purpose. […] Triangles are important in structural engineering because they are the simplest stable form of structure and you see them in all kinds of structures—whether form-finding or not. […] Other forms of pin jointed structure, such as a rectangle, will deform in shear as a mechanism […] unless it has diagonal bracing—making it triangular. […] bending occurs in part of a structure when the forces acting on it tend to make it turn or rotate—but it is constrained or prevented from turning freely by the way it is connected to the rest of the structure or to its foundations. The turning forces may be internal or external.”

“Energy is the capacity of a force to do work. If you stretch an elastic band it has an internal tension force resisting your pull. If you let go of one end the band will recoil and could inflict a sharp sting on your other hand. The internal force has energy or the capacity to do work because you stretched it. Before you let go the energy was potential; after you let go the energy became kinetic. Potential energy is the capacity to do work because of the position of something—in this case because you pulled the two ends of the band apart. […] A car at the top of a hill has the potential energy to roll down the hill if the brakes are released. The potential energy in the elastic band and in a structure has a specific name—it is called ‘strain energy’. Kinetic energy is due to movement, so when you let go of the band […] the potential energy is converted into kinetic energy. Kinetic energy depends on mass and velocity—so a truck can develop more kinetic energy than a small car. When a structure is loaded by a force then the structure moves in whatever way it can to ‘get out of the way’. If it can move freely it will do—just as if you push a car with the handbrake off it will roll forward. However, if the handbrake is on the car will not move, and an internal force will be set up between the point at which you are pushing and the wheels as they grip the road.”

“[A] rope hanging freely as a catenary has minimum energy and […] it can only resist one kind of force—tension. Engineers say that it has one degree of freedom. […] In brief, degrees of freedom are the independent directions in which a structure or any part of a structure can move or deform […] Movements along degrees of freedom define the shape and location of any object at a given time. Each part, each piece of a physical structure whatever its size is a physical object embedded in and connected to other objects […] similar objects which I will call its neighbours. Whatever its size each has the potential to move unless something stops it. Where it may move freely […] then no internal resisting force is created. […] where it is prevented from moving in any direction a reaction force is created with consequential internal forces in the structure. For example at a support to a bridge, where the whole bridge is normally stopped from moving vertically, then an external vertical reaction force develops which must be resisted by a set of internal forces that will depend on the form of the bridge. So inside the bridge structure each piece, however small or large, will move—but not freely. The neighbouring objects will get in the way […]. When this happens internal forces are created as the objects bump up against each other and we represent or model those forces along the pathways which are the local degrees of freedom. The structure has to be strong enough to resist these internal forces along these pathways.”

“The next question is ‘How do we find out how big the forces and movements are?’ It turns out that there is a whole class of structures where this is reasonably straightforward and these are the structures covered in elementary textbooks. Engineers call them ‘statically determinate’ […] For these structures we can find the sizes of the forces just by balancing the internal and external forces to establish equilibrium. […] Unfortunately many real structures can’t be fully explained in this way—they are ‘statically indeterminate’. This is because whilst establishing equilibrium between internal and external forces is necessary it is not sufficient for finding all of the internal forces. […] The four-legged stool is statically indeterminate. You will begin to understand this if you have ever sat at a four-legged wobbly table […] which has one leg shorter than the other three legs. There can be no force in that leg because there is no reaction from the ground. What is more, the opposite leg will have no internal force either because otherwise there would be a net turning moment about the line joining the other two legs. Thus the table is balanced on two legs—which is why it wobbles back and forth. […] each leg has one degree of freedom but we have only three ways of balancing them in the (x,y,z) directions. In mathematical terms, we have four unknown variables (the internal forces) but only three equations (balancing equilibrium in three directions). It follows that there isn’t just one set of forces in equilibrium—indeed, there are many such sets.”
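
The four-legged table example can be put into numbers: three equilibrium equations, four unknown leg forces, and hence infinitely many force distributions that all balance. A minimal numerical sketch (mine, not the book's; the load and geometry are assumed):

```python
import numpy as np

# Hypothetical square table: legs at the four corners, load W applied at the
# centre.  Unknowns: the four vertical leg reactions F1..F4.  Equilibrium gives
# only three equations (vertical force balance plus two moment balances), so
# the system is statically indeterminate.
W = 400.0   # newtons (assumed load)
A = np.array([
    [1,  1,  1,  1],   # sum of vertical forces = W
    [1, -1, -1,  1],   # moment balance about one horizontal axis = 0
    [1,  1, -1, -1],   # moment balance about the other horizontal axis = 0
])
b = np.array([W, 0.0, 0.0])

# One particular solution (the minimum-norm one): equal forces of W/4 per leg.
F_particular, *_ = np.linalg.lstsq(A, b, rcond=None)
print(F_particular)                      # [100. 100. 100. 100.]

# The whole family of equilibrium solutions: add any multiple of the
# null-space direction (1, -1, 1, -1) and the equations still balance.
t = 30.0
F_alternative = F_particular + t * np.array([1, -1, 1, -1])
print(A @ F_alternative)                 # still [400., 0., 0.]
```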

“[W]hen a structure is in equilibrium it has minimum strain energy. […] Strictly speaking, minimum strain energy as a criterion for equilibrium is [however] true only in specific circumstances. To understand this we need to look at the constitutive relations between forces and deformations or displacements. Strain energy is stored potential energy and that energy is the capacity to do work. The strain energy in a body is there because work has been done on it—a force moved through a distance. Hence in order to know the energy we must know how much displacement is caused by a given force. This is called a ‘constitutive relation’ and has the form ‘force equals a constitutive factor times a displacement’. The most common of these relationships is called ‘linear elastic’ where the force equals a simple numerical factor—called the stiffness—times the displacement […] The inverse of the stiffness is called flexibility”.
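
A linear-elastic constitutive relation and the associated strain energy are easy to write down explicitly; in the sketch below the stiffness and displacement values are arbitrary assumptions of mine.

```python
def strain_energy_linear_elastic(k, x):
    """Strain energy stored in a linear-elastic element: the force is F = k * x,
    so the work done loading from 0 to x is the area under the
    force-displacement line, U = 0.5 * k * x**2."""
    return 0.5 * k * x**2

k = 2.0e6        # stiffness in N/m (assumed value)
x = 0.003        # displacement in m (assumed value)
F = k * x                                  # 6000 N
U = strain_energy_linear_elastic(k, x)     # 9.0 J
flexibility = 1.0 / k                      # displacement per unit force
print(F, U, flexibility)
```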

“Aeroplanes take off or ascend because the lift forces due to the forward motion of the plane exceed the weight […] In level flight or cruise the plane is neutrally buoyant and flies at a steady altitude. […] The structure of an aircraft consists of four sets of tubes: the fuselage, the wings, the tail, and the fin. For obvious reasons their weight needs to be as small as possible. […] Modern aircraft structures are semi-monocoque—meaning stressed skin but with a supporting frame. In other words the skin covering, which may be only a few millimetres thick, becomes part of the structure. […] In an overall sense, the lift and drag forces effectively act on the wings through centres of pressure. The wings also carry the weight of engines and fuel. During a typical flight, the positions of these centres of force vary along the wing—for example as fuel is used. The wings are balanced cantilevers fixed to the fuselage. Longer wings (compared to their width) produce greater lift but are also necessarily heavier—so a compromise is required.”

“When structures move quickly, in particular if they accelerate or decelerate, we have to consider […] the inertia force and the damping force. They occur, for example, as an aeroplane takes off and picks up speed. They occur in bridges and buildings that oscillate in the wind. As these structures move the various bits of the structure remain attached—perhaps vibrating in very complex patterns, but they remain joined together in a state of dynamic equilibrium. An inertia force results from an acceleration or deceleration of an object and is directly proportional to the weight of that object. […] Newton’s 2nd Law tells us that the magnitudes of these [inertial] forces are proportional to the rates of change of momentum. […] Damping arises from friction or ‘looseness’ between components. As a consequence, energy is dissipated into other forms such as heat and sound, and the vibrations get smaller. […] The kinetic energy of a structure in static equilibrium is zero, but as the structure moves its potential energy is converted into kinetic energy. This is because the total energy remains constant by the principle of the conservation of energy (the first law of thermodynamics). The changing forces and displacements along the degree of freedom pathways travel as a wave […]. The amplitude of the wave depends on the nature of the material and the connections between components.”

“For [a] structure to be safe the materials must be strong enough to resist the tension, the compression, and the shear. The strength of materials in tension is reasonably straightforward. We just need to know the limiting forces the material can resist. This is usually specified as a set of stresses. A stress is a force divided by a cross sectional area and represents a localized force over a small area of the material. Typical limiting tensile stresses are called the yield stress […] and the rupture stress—so we just need to know their numerical values from tests. Yield occurs when the material cannot regain its original state, and permanent displacements or strains occur. Rupture is when the material breaks or fractures. […] Limiting average shear stresses and maximum allowable stress are known for various materials. […] Strength in compression is much more difficult […] Modern practice using the finite element method enables us to make theoretical estimates […] but it is still approximate because of the simplifications necessary to do the computer analysis […]. One of the challenges to engineers who rely on finite element analysis is to make sure they understand the implications of the simplifications used.”
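
The stress idea translates into a few lines of arithmetic. The rod dimensions, load, and yield stress below are hypothetical illustration values (250 MPa is a typical order of magnitude for mild steel, not a figure from the book).

```python
import math

def axial_stress(force_N, area_m2):
    """Stress = force / cross-sectional area (in pascals)."""
    return force_N / area_m2

# Hypothetical steel tie rod: 20 mm diameter, carrying 50 kN in tension.
diameter = 0.020                          # m
area = math.pi * (diameter / 2) ** 2      # ~3.1e-4 m^2
stress = axial_stress(50e3, area)         # ~1.6e8 Pa, i.e. ~160 MPa

yield_stress = 250e6   # Pa; typical order of magnitude for mild steel (assumption)
print(f"stress = {stress / 1e6:.0f} MPa, utilisation = {stress / yield_stress:.0%}")
```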

“Dynamic loads cause vibrations. One particularly dangerous form of vibration is called resonance […]. All structures have a natural frequency of free vibration. […] Resonance occurs if the frequency of an external vibrating force coincides with the natural frequency of the structure. The consequence is a rapid build up of vibrations that can become seriously damaging. […] Wind is a major source of vibrations. As it flows around a bluff body the air breaks away from the surface and moves in a circular motion like a whirlpool or whirlwind as eddies or vortices. Under certain conditions these vortices may break away on alternate sides, and as they are shed from the body they create pressure differences that cause the body to oscillate. […] a structure is in stable equilibrium when a small perturbation does not result in large displacements. A structure in dynamic equilibrium may oscillate about a stable equilibrium position. […] Flutter is dynamic and a form of wind-excited self-reinforcing oscillation. It occurs, as in the P-delta effect, because of changes in geometry. Forces that are no longer in line because of large displacements tend to modify those displacements of the structure, and these, in turn, modify the forces, and so on. In this process the energy input during a cycle of vibration may be greater than that lost by damping and so the amplitude increases in each cycle until destruction. It is a positive feed-back mechanism that amplifies the initial deformations, causes non-linearity, material plasticity and decreased stiffness, and reduced natural frequency. […] Regular pulsating loads, even very small ones, can cause other problems too through a phenomenon known as fatigue. The word is descriptive—under certain conditions the materials just get tired and crack. A normally ductile material like steel becomes brittle. Fatigue occurs under very small loads repeated many millions of times. All materials in all types of structures have a fatigue limit. […] Fatigue damage occurs deep in the material as microscopic bonds are broken. The problem is particularly acute in the heat affected zones of welded structures.”
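
For a structure idealized as a single mass on a spring, the natural frequency follows from stiffness and mass, and a vortex-shedding frequency can be estimated with the Strouhal relation. The numbers below are invented only to show how the two frequencies can coincide and produce resonance; none of them come from the book.

```python
import math

def natural_frequency_hz(k, m):
    """Natural frequency of an idealized single-degree-of-freedom
    mass-spring system: f = sqrt(k / m) / (2 * pi)."""
    return math.sqrt(k / m) / (2 * math.pi)

def shedding_frequency_hz(wind_speed, diameter, strouhal=0.2):
    """Vortex-shedding frequency for flow past a bluff cylinder,
    f = St * U / D, with St ~ 0.2 as a typical value (an assumption)."""
    return strouhal * wind_speed / diameter

# Hypothetical structure: 2e5 kg effective mass on a lateral stiffness of 5e7 N/m.
f_n = natural_frequency_hz(k=5e7, m=2e5)          # ~2.5 Hz
f_shed = shedding_frequency_hz(wind_speed=25.0, diameter=2.0)   # ~2.5 Hz
print(f"natural frequency ~ {f_n:.1f} Hz, shedding frequency ~ {f_shed:.1f} Hz")
# When the two are close, resonance and a rapid build-up of vibrations can occur.
```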

“Resilience is the ability of a system to recover quickly from difficult conditions. […] One way of delivering a degree of resilience is to make a structure fail-safe—to mitigate failure if it happens. A household electrical fuse is an everyday example. The fuse does not prevent failure, but it does prevent extreme consequences such as an electrical fire. Damage-tolerance is a similar concept. Damage is any physical harm that reduces the value of something. A damage-tolerant structure is one in which any damage can be accommodated at least for a short time until it can be dealt with. […] human factors in failure are not just a matter of individuals’ slips, lapses, or mistakes but are also the result of organizational and cultural situations which are not easy to identify in advance or even at the time. Indeed, they may only become apparent in hindsight. It follows that another major part of safety is to design a structure so that it can be inspected, repaired, and maintained. Indeed all of the processes of creating a structure, whether conceiving, designing, making, or monitoring performance, have to be designed with sufficient resilience to accommodate unexpected events. In other words, safety is not something a system has (a property), rather it is something a system does (a performance). Providing resilience is a form of control—a way of managing uncertainties and risks.”

Stiffness.
Antoni Gaudí. Heinz Isler. Frei Otto.
Eden Project.
Tensegrity.
Bending moment.
Shear and moment diagram.
Stonehenge.
Pyramid at Meidum.
Vitruvius.
Master builder.
John Smeaton.
Puddling (metallurgy).
Cast iron.
Isambard Kingdom Brunel.
Henry Bessemer. Bessemer process.
Institution of Structural Engineers.
Graphic statics (wiki doesn’t have an article on this topic under this name and there isn’t much here, but it looks like google has a lot if you’re interested).
Constitutive equation.
Deformation (mechanics).
Compatibility (mechanics).
Principle of Minimum Complementary Energy.
Direct stiffness method. Finite element method.
Hogging and sagging.
Centre of buoyancy. Metacentre (fluid mechanics). Angle of attack.
Box girder bridge.
D’Alembert’s principle.
Longeron.
Buckling.
S-N diagram.

April 11, 2018 Posted by | Books, Engineering, Physics

The Ice Age (II)

I really liked the book, recommended if you’re at all interested in this kind of stuff. Below some observations from the book’s second half, and some related links:

“Charles MacLaren, writing in 1842, […] argued that the formation of large ice sheets would result in a fall in sea level as water was taken from the oceans and stored frozen on the land. This insight triggered a new branch of ice age research – sea level change. This topic can get rather complicated because as ice sheets grow, global sea level falls. This is known as eustatic sea level change. As ice sheets increase in size, their weight depresses the crust and relative sea level will rise. This is known as isostatic sea level change. […] It is often quite tricky to differentiate between regional-scale isostatic factors and the global-scale eustatic sea level control.”

“By the late 1870s […] glacial geology had become a serious scholarly pursuit with a rapidly growing literature. […] [In the late 1880s] Carvill Lewis […] put forward the radical suggestion that the [sea] shells at Moel Tryfan and other elevated localities (which provided the most important evidence for the great marine submergence of Britain) were not in situ. Building on the earlier suggestions of Thomas Belt (1832–78) and James Croll, he argued that these materials had been dredged from the sea bed by glacial ice and pushed upslope so that ‘they afford no testimony to the former subsidence of the land’. Together, his recognition of terminal moraines and the reworking of marine shells undermined the key pillars of Lyell’s great marine submergence. This was a crucial step in establishing the primacy of glacial ice over icebergs in the deposition of the drift in Britain. […] By the end of the 1880s, it was the glacial dissenters who formed the eccentric minority. […] In the period leading up to World War One, there was [instead] much debate about whether the ice age involved a single phase of ice sheet growth and freezing climate (the monoglacial theory) or several phases of ice sheet build up and decay separated by warm interglacials (the polyglacial theory).”

“As the Earth rotates about its axis travelling through space in its orbit around the Sun, there are three components that change over time in elegant cycles that are entirely predictable. These are known as eccentricity, precession, and obliquity or ‘stretch, wobble, and roll’ […]. These orbital perturbations are caused by the gravitational pull of the other planets in our Solar System, especially Jupiter. Milankovitch calculated how each of these orbital cycles influenced the amount of solar radiation received at different latitudes over time. These are known as Milankovitch Cycles or Croll–Milankovitch Cycles to reflect the important contribution made by both men. […] The shape of the Earth’s orbit around the Sun is not constant. It changes from an almost circular orbit to one that is mildly elliptical (a slightly stretched circle) […]. This orbital eccentricity operates over a 400,000- and 100,000-year cycle. […] Changes in eccentricity have a relatively minor influence on the total amount of solar radiation reaching the Earth, but they are important for the climate system because they modulate the influence of the precession cycle […]. When eccentricity is high, for example, axial precession has a greater impact on seasonality. […] The Earth is currently tilted at an angle of 23.4° to the plane of its orbit around the Sun. Astronomers refer to this axial tilt as obliquity. This angle is not fixed. It rolls back and forth over a 41,000-year cycle from a tilt of 22.1° to 24.5° and back again […]. Even small changes in tilt can modify the strength of the seasons. With a greater angle of tilt, for example, we can have hotter summers and colder winters. […] Cooler, reduced insolation summers are thought to be a key factor in the initiation of ice sheet growth in the middle and high latitudes because they allow more snow to survive the summer melt season. Slightly warmer winters may also favour ice sheet build-up as greater evaporation from a warmer ocean will increase snowfall over the centres of ice sheet growth. […] The Earth’s axis of rotation is not fixed. It wobbles like a spinning top slowing down. This wobble traces a circle on the celestial sphere […]. At present the Earth’s rotational axis points toward Polaris (the current northern pole star) but in 11,000 years it will point towards another star, Vega. This slow circling motion is known as axial precession and it has important impacts on the Earth’s climate by causing the solstices and equinoxes to move around the Earth’s orbit. In other words, the seasons shift over time. Precession operates over a 19,000- and 23,000-year cycle. This cycle is often referred to as the Precession of the Equinoxes.”
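
A toy way to visualize how these cycles combine is to superpose sinusoids with the quoted periods. This is purely illustrative (my own sketch, with arbitrary amplitudes and no attempt at a real insolation calculation).

```python
import numpy as np

# Toy illustration only: superpose sinusoids with the cycle periods given in
# the text (400 kyr & 100 kyr eccentricity, 41 kyr obliquity, 23 & 19 kyr
# precession).  Amplitudes are arbitrary; this is NOT a real insolation model.
t = np.arange(0, 800, 1.0)                  # time in thousands of years
periods    = [400, 100, 41, 23, 19]
amplitudes = [0.3, 0.5, 1.0, 0.7, 0.7]      # arbitrary relative weights

signal = sum(a * np.sin(2 * np.pi * t / p) for a, p in zip(amplitudes, periods))
print(signal[:5])    # a crude "orbital forcing" curve over 800 kyr
```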

“The albedo of a surface is a measure of its ability to reflect solar energy. Darker surfaces tend to absorb most of the incoming solar energy and have low albedos. The albedo of the ocean surface in high latitudes is commonly about 10 per cent — in other words, it absorbs 90 per cent of the incoming solar radiation. In contrast, snow, glacial ice, and sea ice have much higher albedos and can reflect between 50 and 90 per cent of incoming solar energy back into the atmosphere. The elevated albedos of bright frozen surfaces are a key feature of the polar radiation budget. Albedo feedback loops are important over a range of spatial and temporal scales. A cooling climate will increase snow cover on land and the extent of sea ice in the oceans. These high albedo surfaces will then reflect more solar radiation to intensify and sustain the cooling trend, resulting in even more snow and sea ice. This positive feedback can play a major role in the expansion of snow and ice cover and in the initiation of a glacial phase. Such positive feedbacks can also work in reverse when a warming phase melts ice and snow to reveal dark and low albedo surfaces such as peaty soil or bedrock.”
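
The ice-albedo feedback can be caricatured with a zero-dimensional energy-balance model in which the albedo depends on temperature. Everything below (albedo values, thresholds, and the crude 'greenhouse' emissivity factor) is an illustrative assumption of mine, not from the book, but it shows how the same solar input can support both a warm, ice-free state and a cold, ice-covered one.

```python
# A zero-dimensional "toy" energy-balance model illustrating the ice-albedo
# feedback described in the quote.  All numbers are illustrative assumptions.
SIGMA = 5.67e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0          # solar constant, W m^-2
EPSILON = 0.61       # crude effective emissivity standing in for the greenhouse effect

def albedo(T):
    """Step-like planetary albedo: bright when ice-covered, dark when ice-free."""
    if T < 250.0:
        return 0.65                                  # largely ice-covered
    if T > 280.0:
        return 0.30                                  # largely ice-free
    return 0.65 - 0.35 * (T - 250.0) / 30.0          # partial ice cover

def equilibrium_temperature(T_guess, n_iter=200):
    """Fixed-point iteration of T = [S0*(1 - albedo(T)) / (4*EPSILON*SIGMA)]**0.25."""
    T = T_guess
    for _ in range(n_iter):
        T = (S0 * (1.0 - albedo(T)) / (4.0 * EPSILON * SIGMA)) ** 0.25
    return T

# Starting warm and starting cold settle into different stable states, because
# the albedo feedback reinforces whichever trend it begins with.
print(round(equilibrium_temperature(300.0), 1))   # ~288 K (ice-free branch)
print(round(equilibrium_temperature(230.0), 1))   # ~242 K (ice-covered branch)
```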

“At the end of the Cretaceous, around 65 million years ago (Ma), lush forests thrived in the Polar Regions and ocean temperatures were much warmer than today. This warm phase continued for the next 10 million years, peaking during the Eocene thermal maximum […]. From that time onwards, however, Earth’s climate began a steady cooling that saw the initiation of widespread glacial conditions, first in Antarctica between 40 and 30 Ma, in Greenland between 20 and 15 Ma, and then in the middle latitudes of the northern hemisphere around 2.5 Ma. […] Over the past 55 million years, a succession of processes driven by tectonics combined to cool our planet. It is difficult to isolate their individual contributions or to be sure about the details of cause and effect over this long period, especially when there are uncertainties in dating and when one considers the complexity of the climate system with its web of internal feedbacks.” [Potential causes which have been highlighted include: The uplift of the Himalayas (leading to increased weathering, leading over geological time to an increased amount of CO2 being sequestered in calcium carbonate deposited on the ocean floor, lowering atmospheric CO2 levels), the isolation of Antarctica which created the Antarctic Circumpolar Current (leading to a cooling of Antarctica), the dry-out of the Mediterranean Sea ~5mya (which significantly lowered salt concentrations in the World Ocean, meaning that sea water froze at a higher temperature), and the formation of the Isthmus of Panama. – US].

“[F]or most of the last 1 million years, large ice sheets were present in the middle latitudes of the northern hemisphere and sea levels were lower than today. Indeed, ‘average conditions’ for the Quaternary Period involve much more ice than present. The interglacial peaks — such as the present Holocene interglacial, with its ice volume minima and high sea level — are the exception rather than the norm. The sea level maximum of the Last Interglacial (MIS 5) is higher than today. It also shows that cold glacial stages (c.80,000 years duration) are much longer than interglacials (c.15,000 years). […] Arctic willow […], the northernmost woody plant on Earth, is found in central European pollen records from the last glacial stage. […] For most of the Quaternary deciduous forests have been absent from most of Europe. […] the interglacial forests of temperate Europe that are so familiar to us today are, in fact, rather atypical when we consider the long view of Quaternary time. Furthermore, if the last glacial period is representative of earlier ones, for much of the Quaternary terrestrial ecosystems were continuously adjusting to a shifting climate.”

“Greenland ice cores typically have very clear banding […] that corresponds to individual years of snow accumulation. This is because the snow that falls in summer under the permanent Arctic sun differs in texture to the snow that falls in winter. The distinctive paired layers can be counted like tree rings to produce a finely resolved chronology with annual and even seasonal resolution. […] Ice accumulation is generally much slower in Antarctica, so the ice core record takes us much further back in time. […] As layers of snow become compacted into ice, air bubbles recording the composition of the atmosphere are sealed in discrete layers. This fossil air can be recovered to establish the changing concentration of greenhouse gases such as carbon dioxide (CO2) and methane (CH4). The ice core record therefore allows climate scientists to explore the processes involved in climate variability over very long timescales. […] By sampling each layer of ice and measuring its oxygen isotope composition, Dansgaard produced an annual record of air temperature for the last 100,000 years. […] Perhaps the most startling outcome of this work was the demonstration that global climate could change extremely rapidly. Dansgaard showed that dramatic shifts in mean air temperature (>10°C) had taken place in less than a decade. These findings were greeted with scepticism and there was much debate about the integrity of the Greenland record, but subsequent work from other drilling sites vindicated all of Dansgaard’s findings. […] The ice core records from Greenland reveal a remarkable sequence of abrupt warming and cooling cycles within the last glacial stage. These are known as Dansgaard–Oeschger (D–O) cycles. […] [A] series of D–O cycles between 65,000 and 10,000 years ago [caused] mean annual air temperatures on the Greenland ice sheet [to be] shifted by as much as 10°C. Twenty-five of these rapid warming events have been identified during the last glacial period. This discovery dispelled the long held notion that glacials were lengthy periods of stable and unremitting cold climate. The ice core record shows very clearly that even the glacial climate flipped back and forth. […] D–O cycles commence with a very rapid warming (between 5 and 10°C) over Greenland followed by a steady cooling […] Deglaciations are rapid because positive feedbacks speed up both the warming trend and ice sheet decay. […] The ice core records heralded a new era in climate science: the study of abrupt climate change. Most sedimentary records of ice age climate change yield relatively low resolution information — a thousand years may be packed into a few centimetres of marine or lake sediment. In contrast, ice cores cover every year. They also retain a greater variety of information about the ice age past than any other archive. We can even detect layers of volcanic ash in the ice and pinpoint the date of ancient eruptions.”

“There are strong thermal gradients in both hemispheres because the low latitudes receive the most solar energy and the poles the least. To redress these imbalances the atmosphere and oceans move heat polewards — this is the basis of the climate system. In the North Atlantic a powerful surface current takes warmth from the tropics to higher latitudes: this is the famous Gulf Stream and its northeastern extension the North Atlantic Drift. Two main forces drive this current: the strong southwesterly winds and the return flow of colder, saltier water known as North Atlantic Deep Water (NADW). The surface current loses much of its heat to air masses that give maritime Europe a moist, temperate climate. Evaporative cooling also increases its salinity so that it begins to sink. As the dense and cold water sinks to the deep ocean to form NADW, it exerts a strong pull on the surface currents to maintain the cycle. It returns south at depths >2,000 m. […] The thermohaline circulation in the North Atlantic was periodically interrupted during Heinrich Events when vast discharges of melting icebergs cooled the ocean surface and reduced its salinity. This shut down the formation of NADW and suppressed the Gulf Stream.”

Links:

Archibald Geikie.
Andrew Ramsay (geologist).
Albrecht Penck. Eduard Brückner. Gunz glaciation. Mindel glaciation. Riss glaciation. Würm.
Insolation.
Perihelion and aphelion.
Deep Sea Drilling Project.
Foraminifera.
δ18O. Isotope fractionation.
Marine isotope stage.
Cesare Emiliani.
Nicholas Shackleton.
Brunhes–Matuyama reversal. Geomagnetic reversal. Magnetostratigraphy.
Climate: Long range Investigation, Mapping, and Prediction (CLIMAP).
Uranium–thorium dating. Luminescence dating. Optically stimulated luminescence. Cosmogenic isotope dating.
The role of orbital forcing in the Early-Middle Pleistocene Transition (paper).
European Project for Ice Coring in Antarctica (EPICA).
Younger Dryas.
Lake Agassiz.
Greenland ice core project (GRIP).
J Harlen Bretz. Missoula Floods.
Pleistocene megafauna.

February 25, 2018 Posted by | Astronomy, Engineering, Geology, History, Paleontology, Physics

Systems Biology (I)

This book is really dense and somewhat tough for me to blog. One significant problem is that “The authors assume that the reader is already familiar with the material covered in a classic biochemistry course.” I know enough biochem to follow most of the stuff in this book, and I was definitely quite happy to have recently read John Finney’s book on the biochemical properties of water and Christopher Hall’s introduction to materials science, as both of those books’ coverage turned out to be highly relevant (these are far from the only relevant books I’ve read semi-recently – Atkins’ introduction to thermodynamics is another book that springs to mind) – but even so, what do you leave out when writing a post like this? I decided to leave out a lot. Posts covering books like this one are hard to write because they can so easily blow up in your face: you have to include so many details for the material in the post to even start to make sense to people who didn’t read the original text. And if you leave out all the details, what’s really left? It’s difficult…

Anyway, some observations from the first chapters of the book below.

“[T]he biological world consists of self-managing and self-organizing systems which owe their existence to a steady supply of energy and information. Thermodynamics introduces a distinction between open and closed systems. Reversible processes occurring in closed systems (i.e. independent of their environment) automatically gravitate toward a state of equilibrium which is reached once the velocity of a given reaction in both directions becomes equal. When this balance is achieved, we can say that the reaction has effectively ceased. In a living cell, a similar condition occurs upon death. Life relies on certain spontaneous processes acting to unbalance the equilibrium. Such processes can only take place when substrates and products of reactions are traded with the environment, i.e. they are only possible in open systems. In turn, achieving a stable level of activity in an open system calls for regulatory mechanisms. When the reaction consumes or produces resources that are exchanged with the outside world at an uneven rate, the stability criterion can only be satisfied via a negative feedback loop […] cells and living organisms are thermodynamically open systems […] all structures which play a role in balanced biological activity may be treated as components of a feedback loop. This observation enables us to link and integrate seemingly unrelated biological processes. […] the biological structures most directly involved in the functions and mechanisms of life can be divided into receptors, effectors, information conduits and elements subject to regulation (reaction products and action results). Exchanging these elements with the environment requires an inflow of energy. Thus, living cells are — by their nature — open systems, requiring an energy source […] A thermodynamically open system lacking equilibrium due to a steady inflow of energy in the presence of automatic regulation is […] a good theoretical model of a living organism. […] Pursuing growth and adapting to changing environmental conditions calls for specialization which comes at the expense of reduced universality. A specialized cell is no longer self-sufficient. As a consequence, a need for higher forms of intercellular organization emerges. The structure which provides cells with suitable protection and ensures continued homeostasis is called an organism.”
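
The negative feedback loop idea can be sketched in a few lines: production of some quantity is throttled as its level rises above a set-point, so the level returns to a stable value whether it starts too low or too high. This is my own toy illustration, not a model from the book, and the numbers are invented.

```python
# Minimal sketch of a negative-feedback loop: a substance is consumed at a fixed
# rate, while its production is throttled as the level rises above a set-point.
# All numbers are invented for illustration.
def simulate(level, set_point=1.0, gain=0.5, consumption=0.2, steps=50):
    history = []
    for _ in range(steps):
        production = max(0.0, consumption + gain * (set_point - level))
        level += production - consumption          # net change per time step
        history.append(level)
    return history

print(simulate(level=0.3)[-1])   # settles back near the set-point (~1.0)
print(simulate(level=1.8)[-1])   # likewise when starting above it
```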

“In biology, structure and function are tightly interwoven. This phenomenon is closely associated with the principles of evolution. Evolutionary development has produced structures which enable organisms to develop and maintain their architecture, perform actions and store the resources needed to survive. For this reason we introduce a distinction between support structures (which are akin to construction materials), function-related structures (fulfilling the role of tools and machines), and storage structures (needed to store important substances, achieving a compromise between tight packing and ease of access). […] Biology makes extensive use of small-molecule structures and polymers. The physical properties of polymer chains make them a key building block in biological structures. There are several reasons as to why polymers are indispensable in nature […] Sequestration of resources is subject to two seemingly contradictory criteria: 1. Maximize storage density; 2. Perform sequestration in such a way as to allow easy access to resources. […] In most biological systems, storage applies to energy and information. Other types of resources are only occasionally stored […]. Energy is stored primarily in the form of saccharides and lipids. Saccharides are derivatives of glucose, rendered insoluble (and thus easy to store) via polymerization. Their polymerized forms, stabilized with α-glycosidic bonds, include glycogen (in animals) and starch (in plantlife). […] It should be noted that the somewhat loose packing of polysaccharides […] makes them unsuitable for storing large amounts of energy. In a typical human organism only ca. 600 kcal of energy is stored in the form of glycogen, while (under normal conditions) more than 100,000 kcal exists as lipids. Lipid deposits usually assume the form of triglycerides (triacylglycerols). Their properties can be traced to the similarities between fatty acids and hydrocarbons. Storage efficiency (i.e. the amount of energy stored per unit of mass) is twice that of polysaccharides, while access remains adequate owing to the relatively large surface area and high volume of lipids in the organism.”
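
The quoted storage figures can be turned into rough masses using the familiar textbook energy densities of roughly 4 kcal/g for carbohydrate and 9 kcal/g for fat (these densities are general nutritional values, not taken from this book).

```python
# Rough back-of-the-envelope masses behind the storage figures in the quote,
# using standard nutritional energy densities (assumptions, not from the book).
GLYCOGEN_KCAL_PER_G = 4.0
LIPID_KCAL_PER_G = 9.0

glycogen_store_kcal = 600.0       # from the quote
lipid_store_kcal = 100_000.0      # from the quote

print(f"glycogen: ~{glycogen_store_kcal / GLYCOGEN_KCAL_PER_G:.0f} g stored")
print(f"lipids:   ~{lipid_store_kcal / LIPID_KCAL_PER_G / 1000:.0f} kg stored")
print(f"energy density ratio (lipid/carbohydrate): "
      f"{LIPID_KCAL_PER_G / GLYCOGEN_KCAL_PER_G:.1f}x")
```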

“Most living organisms store information in the form of tightly-packed DNA strands. […] It should be noted that only a small percentage of DNA (a few per cent) conveys biologically relevant information. The purpose of the remaining ballast is to enable suitable packing and exposure of these important fragments. If all of DNA were to consist of useful code, it would be nearly impossible to devise a packing strategy guaranteeing access to all of the stored information.”

“The seemingly endless diversity of biological functions frustrates all but the most persistent attempts at classification. For the purpose of this handbook we assume that each function can be associated either with a single cell or with a living organism. In both cases, biological functions are strictly subordinate to automatic regulation, based — in a stable state — on negative feedback loops, and in processes associated with change (for instance in embryonic development) — on automatic execution of predetermined biological programs. Individual components of a cell cannot perform regulatory functions on their own […]. Thus, each element involved in the biological activity of a cell or organism must necessarily participate in a regulatory loop based on processing information.”

“Proteins are among the most basic active biological structures. Most of the well-known proteins studied thus far perform effector functions: this group includes enzymes, transport proteins, certain immune system components (complement factors) and myofibrils. Their purpose is to maintain biological systems in a steady state. Our knowledge of receptor structures is somewhat poorer […] Simple structures, including individual enzymes and components of multienzyme systems, can be treated as “tools” available to the cell, while advanced systems, consisting of many mechanically-linked tools, resemble machines. […] Machinelike mechanisms are readily encountered in living cells. A classic example is fatty acid synthesis, performed by dedicated machines called synthases. […] Multiunit structures acting as machines can be encountered wherever complex biochemical processes need to be performed in an efficient manner. […] If the purpose of a machine is to generate motion then a thermally powered machine can accurately be called a motor. This type of action is observed e.g. in myocytes, where transmission involves reordering of protein structures using the energy generated by hydrolysis of high-energy bonds.”

“In biology, function is generally understood as specific physiochemical action, almost universally mediated by proteins. Most such actions are reversible which means that a single protein molecule may perform its function many times. […] Since spontaneous noncovalent surface interactions are very infrequent, the shape and structure of active sites — with high concentrations of hydrophobic residues — makes them the preferred area of interaction between functional proteins and their ligands. They alone provide the appropriate conditions for the formation of hydrogen bonds; moreover, their structure may determine the specific nature of interaction. The functional bond between a protein and a ligand is usually noncovalent and therefore reversible.”

“In general terms, we can state that enzymes accelerate reactions by lowering activation energies for processes which would otherwise occur very slowly or not at all. […] The activity of enzymes goes beyond synthesizing a specific protein-ligand complex (as in the case of antibodies or receptors) and involves an independent catalytic attack on a selected bond within the ligand, precipitating its conversion into the final product. The relative independence of both processes (binding of the ligand in the active site and catalysis) is evidenced by the phenomenon of noncompetitive inhibition […] Kinetic studies of enzymes have provided valuable insight into the properties of enzymatic inhibitors — an important field of study in medicine and drug research. Some inhibitors, particularly competitive ones (i.e. inhibitors which outcompete substrates for access to the enzyme), are now commonly used as drugs. […] Physical and chemical processes may only occur spontaneously if they generate energy, or non-spontaneously if they consume it. However, all processes occurring in a cell must have a spontaneous character because only these processes may be catalyzed by enzymes. Enzymes merely accelerate reactions; they do not provide energy. […] The change in enthalpy associated with a chemical process may be calculated as a net difference in the sum of molecular binding energies prior to and following the reaction. Entropy is a measure of the likelihood that a physical system will enter a given state. Since chaotic distribution of elements is considered the most probable, physical systems exhibit a general tendency to gravitate towards chaos. Any form of ordering is thermodynamically disadvantageous.”
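
The point about lowering activation energies can be quantified with the Arrhenius relation; the 30 kJ/mol reduction used below is a hypothetical number chosen only to illustrate the scale of the effect.

```python
import math

# How much does lowering the activation energy speed a reaction up?
# The Arrhenius relation k = A * exp(-Ea / (R*T)) gives a rate-enhancement
# factor of exp(delta_Ea / (R*T)) for a drop in Ea.
R = 8.314        # J mol^-1 K^-1
T = 310.0        # K, roughly body temperature

def rate_enhancement(delta_Ea_joules_per_mol, temperature=T):
    return math.exp(delta_Ea_joules_per_mol / (R * temperature))

# A hypothetical enzyme that lowers the activation barrier by 30 kJ/mol:
print(f"{rate_enhancement(30e3):.1e}")   # ~1e5-fold acceleration
```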

“The chemical reactions which power biological processes are characterized by varying degrees of efficiency. In general, they tend to be on the lower end of the efficiency spectrum, compared to energy sources which drive matter transformation processes in our universe. In search for a common criterion to describe the efficiency of various energy sources, we can refer to the net loss of mass associated with a release of energy, according to Einstein’s formula E = mc². The ΔM/M coefficient (relative loss of mass, given e.g. in %) allows us to compare the efficiency of energy sources. The most efficient processes are those involved in the gravitational collapse of stars. Their efficiency may reach 40 %, which means that 40 % of the stationary mass of the system is converted into energy. In comparison, nuclear reactions have an approximate efficiency of 0.8 %. The efficiency of chemical energy sources available to biological systems is incomparably lower and amounts to approximately 10⁻⁷ % […]. Among chemical reactions, the most potent sources of energy are found in oxidation processes, commonly exploited by biological systems. Oxidation tends to result in the largest net release of energy per unit of mass, although the efficiency of specific types of oxidation varies. […] given unrestricted access to atmospheric oxygen and to hydrogen atoms derived from hydrocarbons — the combustion of hydrogen (i.e. the synthesis of water; H2 + 1/2O2 = H2O) has become a principal source of energy in nature, next to photosynthesis, which exploits the energy of solar radiation. […] The basic process associated with the release of hydrogen and its subsequent oxidation (called the Krebs cycle) is carried by processes which transfer electrons onto oxygen atoms […]. Oxidation occurs in stages, enabling optimal use of the released energy. An important byproduct of water synthesis is the universal energy carrier known as ATP (synthesized separately). As water synthesis is a highly spontaneous process, it can be exploited to cover the energy debt incurred by endergonic synthesis of ATP, as long as both processes are thermodynamically coupled, enabling spontaneous catalysis of anhydride bonds in ATP. Water synthesis is a universal source of energy in heterotrophic systems. In contrast, autotrophic organisms rely on the energy of light which is exploited in the process of photosynthesis. Both processes yield ATP […] Preparing nutrients (hydrogen carriers) for participation in water synthesis follows different paths for sugars, lipids and proteins. This is perhaps obvious given their relative structural differences; however, in all cases the final form, which acts as a substrate for dehydrogenases, is acetyl-CoA.”
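
The ~10⁻⁷ % figure for chemical energy sources is easy to check against E = mc². Assuming hydrogen combustion at roughly 143 MJ per kg of fuel (a standard heating value, with the mass loss reckoned per kg of hydrogen rather than per kg of reactants), the relative mass loss comes out at the quoted order of magnitude.

```python
# Check the ~1e-7 % figure for chemical energy sources via E = m * c**2.
# Assumption: hydrogen combustion at ~143 MJ per kg of H2 (a standard heating
# value), with the mass loss reckoned per kg of hydrogen fuel.
C = 2.998e8                       # speed of light, m/s
energy_per_kg = 143e6             # J released per kg of H2 burned

delta_m_over_m = energy_per_kg / C**2          # relative mass loss
print(f"relative mass loss: {delta_m_over_m:.1e} "
      f"= {delta_m_over_m * 100:.1e} %")       # ~1.6e-7 %, matching the quote
```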

“Photosynthesis is a process which — from the point of view of electron transfer — can be treated as a counterpart of the respiratory chain. In heterotrophic organisms, mitochondria transport electrons from hydrogenated compounds (sugars, lipids, proteins) onto oxygen molecules, synthesizing water in the process, whereas in the course of photosynthesis electrons released by breaking down water molecules are used as a means of reducing oxidised carbon compounds […]. In heterotrophic organisms the respiratory chain has a spontaneous quality (owing to its oxidative properties); however any reverse process requires energy to occur. In the case of photosynthesis this energy is provided by sunlight […] Hydrogen combustion and photosynthesis are the basic sources of energy in the living world. […] For an energy source to become useful, non-spontaneous reactions must be coupled to its operation, resulting in a thermodynamically unified system. Such coupling can be achieved by creating a coherent framework in which the spontaneous and non-spontaneous processes are linked, either physically or chemically, using a bridging component which affects them both. If the properties of both reactions are different, the bridging component must also enable suitable adaptation and mediation. […] Direct exploitation of the energy released via the hydrolysis of ATP is usually possible by introducing an active binding carrier mediating the energy transfer. […] Carriers are considered active as long as their concentration ensures a sufficient release of energy to synthesize a new chemical bond by way of a non-spontaneous process. Active carriers are relatively short-lived […] Any active carrier which performs its function outside of the active site must be sufficiently stable to avoid breaking up prior to participating in the synthesis reaction. Such mobile carriers are usually produced when the required synthesis consists of several stages or cannot be conducted in the active site of the enzyme for sterical reasons. Contrary to ATP, active energy carriers are usually reaction-specific. […] Mobile energy carriers are usually formed as a result of hydrolysis of two high-energy ATP bonds. In many cases this is the minimum amount of energy required to power a reaction which synthesizes a single chemical bond. […] Expelling a mobile or unstable reaction component in order to increase the spontaneity of active energy carrier synthesis is a process which occurs in many biological mechanisms […] The action of active energy carriers may be compared to a ball rolling down a hill. The descending ball gains sufficient energy to traverse another, smaller mound, adjacent to its starting point. In our case, the smaller hill represents the final synthesis reaction […] Understanding the role of active carriers is essential for the study of metabolic processes.”

“A second category of processes, directly dependent on energy sources, involves structural reconfiguration of proteins, which can be further differentiated into low and high-energy reconfiguration. Low-energy reconfiguration occurs in proteins which form weak, easily reversible bonds with ligands. In such cases, structural changes are powered by the energy released in the creation of the complex. […] Important low-energy reconfiguration processes may occur in proteins which consist of subunits. Structural changes resulting from relative motion of subunits typically do not involve significant expenditures of energy. Of particular note are the so-called allosteric proteins […] whose rearrangement is driven by a weak and reversible bond between the protein and an oxygen molecule. Allosteric proteins are genetically conditioned to possess two stable structural configurations, easily swapped as a result of binding or releasing ligands. Thus, they tend to have two comparable energy minima (separated by a low threshold), each of which may be treated as a global minimum corresponding to the native form of the protein. Given such properties, even a weakly interacting ligand may trigger significant structural reconfiguration. This phenomenon is of critical importance to a variety of regulatory proteins. In many cases, however, the second potential minimum in which the protein may achieve relative stability is separated from the global minimum by a high threshold requiring a significant expenditure of energy to overcome. […] Contrary to low-energy reconfigurations, the relative difference in ligand concentrations is insufficient to cover the cost of a difficult structural change. Such processes are therefore coupled to highly exergonic reactions such as ATP hydrolysis. […]  The link between a biological process and an energy source does not have to be immediate. Indirect coupling occurs when the process is driven by relative changes in the concentration of reaction components. […] In general, high-energy reconfigurations exploit direct coupling mechanisms while indirect coupling is more typical of low-energy processes”.

“Muscle action requires a major expenditure of energy. There is a nonlinear dependence between the degree of physical exertion and the corresponding energy requirements. […] Training may improve the power and endurance of muscle tissue. Muscle fibers subjected to regular exertion may improve their glycogen storage capacity, ATP production rate, oxidative metabolism and the use of fatty acids as fuel.”

February 4, 2018 Posted by | Biology, Books, Chemistry, Genetics, Molecular biology, Pharmacology, Physics

Lakes (I)

“The aim of this book is to provide a condensed overview of scientific knowledge about lakes, their functioning as ecosystems that we are part of and depend upon, and their responses to environmental change. […] Each chapter briefly introduces concepts about the physical, chemical, and biological nature of lakes, with emphasis on how these aspects are connected, the relationships with human needs and impacts, and the implications of our changing global environment.”

I’m currently reading this book and I really like it so far. I have added some observations from the first half of the book and some coverage-related links below.

“High resolution satellites can readily detect lakes above 0.002 kilometres square (km2) in area; that’s equivalent to a circular waterbody some 50m across. Using this criterion, researchers estimate from satellite images that the world contains 117 million lakes, with a total surface area amounting to 5 million km2. […] continuous accumulation of materials on the lake floor, both from inflows and from the production of organic matter within the lake, means that lakes are ephemeral features of the landscape, and from the moment of their creation onwards, they begin to fill in and gradually disappear. The world’s deepest and most ancient freshwater ecosystem, Lake Baikal in Russia (Siberia), is a compelling example: it has a maximum depth of 1,642m, but its waters overlie a much deeper basin that over the twenty-five million years of its geological history has become filled with some 7,000m of sediments. Lakes are created in a great variety of ways: tectonic basins formed by movements in the Earth’s crust, the scouring and residual ice effects of glaciers, as well as fluvial, volcanic, riverine, meteorite impacts, and many other processes, including human construction of ponds and reservoirs. Tectonic basins may result from a single fault […] or from a series of intersecting fault lines. […] The oldest and deepest lakes in the world are generally of tectonic origin, and their persistence through time has allowed the evolution of endemic plants and animals; that is, species that are found only at those sites.”
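
The quoted detection threshold and lake counts invite a couple of quick sanity checks (the arithmetic below is mine, not the book's).

```python
import math

# Sanity-check the figures in the quote.
diameter_m = 50.0
area_km2 = math.pi * (diameter_m / 2) ** 2 / 1e6
print(f"a {diameter_m:.0f} m circular waterbody covers ~{area_km2:.4f} km^2")  # ~0.002

n_lakes = 117e6
total_area_km2 = 5e6
print(f"mean lake area: ~{total_area_km2 / n_lakes:.3f} km^2")  # most lakes are small
```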

“In terms of total numbers, most of the world’s lakes […] owe their origins to glaciers that during the last ice age gouged out basins in the rock and deepened river valleys. […] As the glaciers retreated, their terminal moraines (accumulations of gravel and sediments) created dams in the landscape, raising water levels or producing new lakes. […] During glacial retreat in many areas of the world, large blocks of glacial ice broke off and were left behind in the moraines. These subsequently melted out to produce basins that filled with water, called ‘kettle’ or ‘pothole’ lakes. Such waterbodies are well known across the plains of North America and Eurasia. […] The most violent of lake births are the result of volcanoes. The craters left behind after a volcanic eruption can fill with water to form small, often circular-shaped and acidic lakes. […] Much larger lakes are formed by the collapse of a magma chamber after eruption to produce caldera lakes. […] Craters formed by meteorite impacts also provide basins for lakes, and have proved to be of great scientific as well as human interest. […] There was a time when limnologists paid little attention to small lakes and ponds, but, this has changed with the realization that although such waterbodies are modest in size, they are extremely abundant throughout the world and make up a large total surface area. Furthermore, these smaller waterbodies often have high rates of chemical activity such as greenhouse gas production and nutrient cycling, and they are major habitats for diverse plants and animals”.

“For Forel, the science of lakes could be subdivided into different disciplines and subjects, all of which continue to occupy the attention of freshwater scientists today […]. First, the physical environment of a lake includes its geological origins and setting, the water balance and exchange of heat with the atmosphere, as well as the penetration of light, the changes in temperature with depth, and the waves, currents, and mixing processes that collectively determine the movement of water. Second, the chemical environment is important because lake waters contain a great variety of dissolved materials (‘solutes’) and particles that play essential roles in the functioning of the ecosystem. Third, the biological features of a lake include not only the individual species of plants, microbes, and animals, but also their organization into food webs, and the distribution and functioning of these communities across the bottom of the lake and in the overlying water.”

“In the simplest hydrological terms, lakes can be thought of as tanks of water in the landscape that are continuously topped up by their inflowing rivers, while spilling excess water via their outflow […]. Based on this model, we can pose the interesting question: how long does the average water molecule stay in the lake before leaving at the outflow? This value is referred to as the water residence time, and it can be simply calculated as the total volume of the lake divided by the water discharge at the outlet. This lake parameter is also referred to as the ‘flushing time’ (or ‘flushing rate’, if expressed as a proportion of the lake volume discharged per unit of time) because it provides an estimate of how fast mineral salts and pollutants can be flushed out of the lake basin. In general, lakes with a short flushing time are more resilient to the impacts of human activities in their catchments […] Each lake has its own particular combination of catchment size, volume, and climate, and this translates into a water residence time that varies enormously among lakes [from perhaps a month to more than a thousand years, US] […] A more accurate approach towards calculating the water residence time is to consider the question: if the lake were to be pumped dry, how long would it take to fill it up again? For most lakes, this will give a similar value to the outflow calculation, but for lakes where evaporation is a major part of the water balance, the residence time will be much shorter.”
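
The residence-time arithmetic described above is simple enough to sketch in a few lines of Python; the lake volume and outflow below are hypothetical values I picked purely to illustrate the volume-divided-by-discharge calculation.

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def residence_time_years(volume_m3, outflow_m3_per_s):
    """Water residence time = lake volume / discharge at the outlet."""
    return volume_m3 / (outflow_m3_per_s * SECONDS_PER_YEAR)

def flushing_rate_per_year(volume_m3, outflow_m3_per_s):
    """Flushing rate = fraction of the lake volume discharged per year."""
    return 1.0 / residence_time_years(volume_m3, outflow_m3_per_s)

# Hypothetical lake: 2 km^3 of water, mean outflow of 50 m^3/s.
volume = 2e9    # m^3
outflow = 50.0  # m^3/s
print(residence_time_years(volume, outflow))    # ~1.3 years
print(flushing_rate_per_year(volume, outflow))  # ~0.79 of the volume per year
```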

“Each year, mineral and organic particles are deposited by wind on the lake surface and are washed in from the catchment, while organic matter is produced within the lake by aquatic plants and plankton. There is a continuous rain of this material downwards, ultimately accumulating as an annual layer of sediment on the lake floor. These lake sediments are storehouses of information about past changes in the surrounding catchment, and they provide a long-term memory of how the limnology of a lake has responded to those changes. The analysis of these natural archives is called ‘palaeolimnology’ (or ‘palaeoceanography’ for marine studies), and this branch of the aquatic sciences has yielded enormous insights into how lakes change through time, including the onset, effects, and abatement of pollution; changes in vegetation both within and outside the lake; and alterations in regional and global climate.”

“Sampling for palaeolimnological analysis is typically undertaken in the deepest waters to provide a more integrated and complete picture of the lake basin history. This is also usually the part of the lake where sediment accumulation has been greatest, and where the disrupting activities of bottom-dwelling animals (‘bioturbation’ of the sediments) may be reduced or absent. […] Some of the most informative microfossils to be found in lake sediments are diatoms, an algal group that has cell walls (‘frustules’) made of silica glass that resist decomposition. Each lake typically contains dozens to hundreds of different diatom species, each with its own characteristic set of environmental preferences […]. A widely adopted approach is to sample many lakes and establish a statistical relationship or ‘transfer function’ between diatom species composition (often by analysis of surface sediments) and a lake water variable such as temperature, pH, phosphorus, or dissolved organic carbon. This quantitative species–environment relationship can then be applied to the fossilized diatom species assemblage in each stratum of a sediment core from a lake in the same region, and in this way the physical and chemical fluctuations that the lake has experienced in the past can be reconstructed or ‘hindcast’ year-by-year. Other fossil indicators of past environmental change include algal pigments, DNA of algae and bacteria including toxic bloom species, and the remains of aquatic animals such as ostracods, cladocerans, and larval insects.”
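
One common way such a transfer function is built is by weighted averaging of species optima; here is a toy sketch of that idea. The taxa, abundances, and pH values are invented for illustration, and real calibration sets are of course far larger and use more sophisticated statistics.

```python
import numpy as np

# Calibration set: relative abundances of three diatom taxa in the surface
# sediments of five lakes (rows), plus the measured pH of each lake.
# All numbers are invented for illustration.
abundances = np.array([
    [0.7, 0.2, 0.1],
    [0.5, 0.3, 0.2],
    [0.2, 0.5, 0.3],
    [0.1, 0.4, 0.5],
    [0.0, 0.3, 0.7],
])
lake_ph = np.array([5.5, 6.0, 6.8, 7.3, 7.8])

# Step 1: estimate each taxon's pH optimum as its abundance-weighted mean pH
# across the calibration lakes.
optima = abundances.T @ lake_ph / abundances.sum(axis=0)

# Step 2: 'hindcast' pH for a fossil assemblage from one stratum of a sediment
# core as the abundance-weighted mean of the taxon optima.
fossil_assemblage = np.array([0.4, 0.4, 0.2])
reconstructed_ph = fossil_assemblage @ optima / fossil_assemblage.sum()
print(round(reconstructed_ph, 2))  # ~6.5
```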

“In lake and ocean studies, the penetration of sunlight into the water can be […] precisely measured with an underwater light meter (submersible radiometer), and such measurements always show that the decline with depth follows a sharp curve rather than a straight line […]. This is because the fate of sunlight streaming downwards in water is dictated by the probability of the photons being absorbed or deflected out of the light path; for example, a 50 per cent probability of photons being lost from the light beam by these processes per metre depth in a lake would result in sunlight values dropping from 100 per cent at the surface to 50 per cent at 1m, 25 per cent at 2m, 12.5 per cent at 3m, and so on. The resulting exponential curve means that for all but the clearest of lakes, there is only enough solar energy for plants, including photosynthetic cells in the plankton (phytoplankton), in the upper part of the water column. […] The depth limit for underwater photosynthesis or primary production is known as the ‘compensation depth‘. This is the depth at which carbon fixed by photosynthesis exactly balances the carbon lost by cellular respiration, so the overall production of new biomass (net primary production) is zero. This depth often corresponds to an underwater light level of 1 per cent of the sunlight just beneath the water surface […] The production of biomass by photosynthesis takes place at all depths above this level, and this zone is referred to as the ‘photic’ zone. […] biological processes in [the] ‘aphotic zone’ are mostly limited to feeding and decomposition. A Secchi disk measurement can be used as a rough guide to the extent of the photic zone: in general, the 1 per cent light level is about twice the Secchi depth.”
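
The exponential decline and the associated rules of thumb (1 per cent light level, photic zone ≈ 2 × Secchi depth) are easy to play with in a few lines of Python; the attenuation coefficient and Secchi depth below are example values, not measurements.

```python
import math

def light_fraction(depth_m, k_per_m):
    """Fraction of surface light remaining at a given depth (exponential decay)."""
    return math.exp(-k_per_m * depth_m)

def one_percent_depth(k_per_m):
    """Depth of the 1% light level, often taken as the base of the photic zone."""
    return -math.log(0.01) / k_per_m

# A 50% loss per metre corresponds to an attenuation coefficient k = ln(2) per metre.
k = math.log(2)
for z in (0, 1, 2, 3):
    print(z, round(100 * light_fraction(z, k), 1))  # 100.0, 50.0, 25.0, 12.5 (%)

print(round(one_percent_depth(k), 1))  # ~6.6 m

# Rule of thumb from the quote: photic zone depth ~ 2 x Secchi depth.
secchi_depth_m = 3.3  # example value
print(2 * secchi_depth_m)  # ~6.6 m, consistent with the 1% light depth above
```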

“[W]ater colour is now used in […] many powerful ways to track changes in water quality and other properties of lakes, rivers, estuaries, and the ocean. […] Lakes have different colours, hues, and brightness levels as a result of the materials that are dissolved and suspended within them. The purest of lakes are deep blue because the water molecules themselves absorb light in the green and, to a greater extent, red end of the spectrum; they scatter the remaining blue photons in all directions, mostly downwards but also back towards our eyes. […] Algae in the water typically cause it to be green and turbid because their suspended cells and colonies contain chlorophyll and other light-capturing molecules that absorb strongly in the blue and red wavebands, but not green. However there are some notable exceptions. Noxious algal blooms dominated by cyanobacteria are blue-green (cyan) in colour caused by their blue-coloured protein phycocyanin, in addition to chlorophyll.”

“[A]t the largest dimension, at the scale of the entire lake, there has to be a net flow from the inflowing rivers to the outflow, and […] from this landscape perspective, lakes might be thought of as enlarged rivers. Of course, this riverine flow is constantly disrupted by wind-induced movements of the water. When the wind blows across the surface, it drags the surface water with it to generate a downwind flow, and this has to be balanced by a return movement of water at depth. […] In large lakes, the rotation of the Earth has plenty of time to exert its weak effect as the water moves from one side of the lake to the other. As a result, the surface water no longer flows in a straight line, but rather is directed into two or more circular patterns or gyres that can move nearshore water masses rapidly into the centre of the lake and vice-versa. Gyres can therefore be of great consequence […] Unrelated to the Coriolis Effect, the interaction between wind-induced currents and the shoreline can also cause water to flow in circular, individual gyres, even in smaller lakes. […] At a much smaller scale, the blowing of wind across a lake can give rise to downward spiral motions in the water, called ‘Langmuir cells’. […] These circulation features are commonly observed in lakes, where the spirals progressing in the general direction of the wind concentrate foam (on days of white-cap waves) or glossy, oily materials (on less windy days) into regularly spaced lines that are parallel to the direction of the wind. […] Density currents must also be included in this brief discussion of water movement […] Cold river water entering a warm lake will be denser than its surroundings and therefore sinks to the bottom, where it may continue to flow for considerable distances. […] Density currents contribute greatly to inshore-offshore exchanges of water, with potential effects on primary productivity, deep-water oxygenation, and the dispersion of pollutants.”

Links:

Limnology.
Drainage basin.
Lake Geneva. Lake Malawi. Lake Tanganyika. Lake Victoria. Lake Biwa. Lake Titicaca.
English Lake District.
Proglacial lake. Lake Agassiz. Lake Ojibway.
Lake Taupo.
Manicouagan Reservoir.
Subglacial lake.
Thermokarst (-lake).
Bathymetry. Bathymetric chart. Hypsographic curve.
Várzea forest.
Lake Chad.
Colored dissolved organic matter.
H2O Temperature-density relationship. Thermocline. Epilimnion. Hypolimnion. Monomictic lake. Dimictic lake. Lake stratification.
Capillary wave. Gravity wave. Seiche. Kelvin wave. Poincaré wave.
Benthic boundary layer.
Kelvin–Helmholtz instability.

January 22, 2018 Posted by | Biology, Books, Botany, Chemistry, Geology, Paleontology, Physics | Leave a comment

Random stuff

I have almost stopped writing posts like these, which has resulted in the accumulation of a very large number of links and studies I figured I might like to blog at some point. This post is mainly an attempt to deal with the backlog – I won’t cover the material in too much detail.

i. Do Bullies Have More Sex? The answer seems to be a qualified yes. A few quotes:

“Sexual behavior during adolescence is fairly widespread in Western cultures (Zimmer-Gembeck and Helfland 2008) with nearly two thirds of youth having had sexual intercourse by the age of 19 (Finer and Philbin 2013). […] Bullying behavior may aid in intrasexual competition and intersexual selection as a strategy when competing for mates. In line with this contention, bullying has been linked to having a higher number of dating and sexual partners (Dane et al. 2017; Volk et al. 2015). This may be one reason why adolescence coincides with a peak in antisocial or aggressive behaviors, such as bullying (Volk et al. 2006). However, not all adolescents benefit from bullying. Instead, bullying may only benefit adolescents with certain personality traits who are willing and able to leverage bullying as a strategy for engaging in sexual behavior with opposite-sex peers. Therefore, we used two independent cross-sectional samples of older and younger adolescents to determine which personality traits, if any, are associated with leveraging bullying into opportunities for sexual behavior.”

“…bullying by males signal the ability to provide good genes, material resources, and protect offspring (Buss and Shackelford 1997; Volk et al. 2012) because bullying others is a way of displaying attractive qualities such as strength and dominance (Gallup et al. 2007; Reijntjes et al. 2013). As a result, this makes bullies attractive sexual partners to opposite-sex peers while simultaneously suppressing the sexual success of same-sex rivals (Gallup et al. 2011; Koh and Wong 2015; Zimmer-Gembeck et al. 2001). Females may denigrate other females, targeting their appearance and sexual promiscuity (Leenaars et al. 2008; Vaillancourt 2013), which are two qualities relating to male mate preferences. Consequently, derogating these qualities lowers a rivals’ appeal as a mate and also intimidates or coerces rivals into withdrawing from intrasexual competition (Campbell 2013; Dane et al. 2017; Fisher and Cox 2009; Vaillancourt 2013). Thus, males may use direct forms of bullying (e.g., physical, verbal) to facilitate intersexual selection (i.e., appear attractive to females), while females may use relational bullying to facilitate intrasexual competition, by making rivals appear less attractive to males.”

The study relies on the use of self-report data, which I find very problematic – so I won’t go into the results here. I’m not quite clear on how those studies mentioned in the discussion ‘have found self-report data [to be] valid under conditions of confidentiality’ – and I remain skeptical. You’ll usually want data from independent observers (e.g. teacher or peer observations) when analyzing these kinds of things. Note in the context of the self-report data problem that if there’s a strong stigma associated with being bullied (there often is, or bullying wouldn’t work as well), asking people if they have been bullied is not much better than asking people if they’re bullying others.

ii. Some topical advice that some people might soon regret not having followed, from the wonderful Things I Learn From My Patients thread:

“If you are a teenage boy experimenting with fireworks, do not empty the gunpowder from a dozen fireworks and try to mix it in your mother’s blender. But if you do decide to do that, don’t hold the lid down with your other hand and stand right over it. This will result in the traumatic amputation of several fingers, burned and skinned forearms, glass shrapnel in your face, and a couple of badly scratched corneas as a start. You will spend months in rehab and never be able to use your left hand again.”

iii. I haven’t talked about the AlphaZero-Stockfish match, but I was of course aware of it and did read a bit about that stuff. Here’s a reddit thread where one of the Stockfish programmers answers questions about the match. A few quotes:

“Which of the two is stronger under ideal conditions is, to me, neither particularly interesting (they are so different that it’s kind of like comparing the maximum speeds of a fish and a bird) nor particularly important (since there is only one of them that you and I can download and run anyway). What is super interesting is that we have two such radically different ways to create a computer chess playing entity with superhuman abilities. […] I don’t think there is anything to learn from AlphaZero that is applicable to Stockfish. They are just too different, you can’t transfer ideas from one to the other.”

“Based on the 100 games played, AlphaZero seems to be about 100 Elo points stronger under the conditions they used. The current development version of Stockfish is something like 40 Elo points stronger than the version used in Google’s experiment. There is a version of Stockfish translated to hand-written x86-64 assembly language that’s about 15 Elo points stronger still. This adds up to roughly half the Elo difference between AlphaZero and Stockfish shown in Google’s experiment.”
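
To get a feel for what Elo gaps of this size mean in terms of expected score, one can use the standard logistic Elo formula; the formula is not from the thread itself, it is just the conventional conversion.

```python
def expected_score(elo_difference):
    """Expected score for the stronger side, standard logistic Elo formula."""
    return 1.0 / (1.0 + 10 ** (-elo_difference / 400))

print(round(expected_score(100), 3))       # ~0.64 for the full ~100-point gap
print(round(expected_score(100 - 55), 3))  # ~0.56 if ~55 of those points are recovered
```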

“It seems that Stockfish was playing with only 1 GB for transposition tables (the area of memory used to store data about the positions previously encountered in the search), which is way too little when running with 64 threads.” [I seem to recall a comp sci guy observing elsewhere that this was less than what was available to his smartphone version of Stockfish, but I didn’t bookmark that comment].

“The time control was a very artificial fixed 1 minute/move. That’s not how chess is traditionally played. Quite a lot of effort has gone into Stockfish’s time management. It’s pretty good at deciding when to move quickly, and when to spend a lot of time on a critical decision. In a fixed time per move game, it will often happen that the engine discovers that there is a problem with the move it wants to play just before the time is out. In a regular time control, it would then spend extra time analysing all alternative moves and trying to find a better one. When you force it to move after exactly one minute, it will play the move it already knows is bad. There is no doubt that this will cause it to lose many games it would otherwise have drawn.”

iv. Thrombolytics for Acute Ischemic Stroke – no benefit found.

“Thrombolysis has been rigorously studied in >60,000 patients for acute thrombotic myocardial infarction, and is proven to reduce mortality. It is theorized that thrombolysis may similarly benefit ischemic stroke patients, though a much smaller number (8120) has been studied in relevant, large scale, high quality trials thus far. […] There are 12 such trials 1-12. Despite the temptation to pool these data the studies are clinically heterogeneous. […] Data from multiple trials must be clinically and statistically homogenous to be validly pooled.14 Large thrombolytic studies demonstrate wide variations in anatomic stroke regions, small- versus large-vessel occlusion, clinical severity, age, vital sign parameters, stroke scale scores, and times of administration. […] Examining each study individually is therefore, in our opinion, both more valid and more instructive. […] Two of twelve studies suggest a benefit […] In comparison, twice as many studies showed harm and these were stopped early. This early stoppage means that the number of subjects in studies demonstrating harm would have included over 2400 subjects based on originally intended enrollments. Pooled analyses are therefore missing these phantom data, which would have further eroded any aggregate benefits. In their absence, any pooled analysis is biased toward benefit. Despite this, there remain five times as many trials showing harm or no benefit (n=10) as those concluding benefit (n=2), and 6675 subjects in trials demonstrating no benefit compared to 1445 subjects in trials concluding benefit.”

“Thrombolytics for ischemic stroke may be harmful or beneficial. The answer remains elusive. We struggled therefore, debating between a ‘yellow’ or ‘red’ light for our recommendation. However, over 60,000 subjects in trials of thrombolytics for coronary thrombosis suggest a consistent beneficial effect across groups and subgroups, with no studies suggesting harm. This consistency was found despite a very small mortality benefit (2.5%), and a very narrow therapeutic window (1% major bleeding). In comparison, the variation in trial results of thrombolytics for stroke and the daunting but consistent adverse effect rate caused by ICH suggested to us that thrombolytics are dangerous unless further study exonerates their use.”

“There is a Cochrane review that pooled estimates of effect. 17 We do not endorse this choice because of clinical heterogeneity. However, we present the NNT’s from the pooled analysis for the reader’s benefit. The Cochrane review suggested a 6% reduction in disability […] with thrombolytics. This would mean that 17 were treated for every 1 avoiding an unfavorable outcome. The review also noted a 1% increase in mortality (1 in 100 patients die because of thrombolytics) and a 5% increase in nonfatal intracranial hemorrhage (1 in 20), for a total of 6% harmed (1 in 17 suffers death or brain hemorrhage).”
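
The NNT figures quoted are just reciprocals of the absolute risk differences; a quick sketch reproducing the quoted numbers:

```python
def nnt(absolute_risk_difference):
    """Number needed to treat (or harm) = 1 / absolute risk difference."""
    return 1.0 / absolute_risk_difference

print(round(nnt(0.06)))  # ~17 treated per patient avoiding an unfavourable outcome
print(round(nnt(0.01)))  # 100 treated per additional death
print(round(nnt(0.05)))  # 20 treated per additional nonfatal intracranial haemorrhage
print(round(nnt(0.06)))  # ~17 treated per patient suffering death or brain haemorrhage
```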

v. Suicide attempts in Asperger Syndrome. An interesting finding: “Over 35% of individuals with AS reported that they had attempted suicide in the past.”

Related: Suicidal ideation and suicide plans or attempts in adults with Asperger’s syndrome attending a specialist diagnostic clinic: a clinical cohort study.

“374 adults (256 men and 118 women) were diagnosed with Asperger’s syndrome in the study period. 243 (66%) of 367 respondents self-reported suicidal ideation, 127 (35%) of 365 respondents self-reported plans or attempts at suicide, and 116 (31%) of 368 respondents self-reported depression. Adults with Asperger’s syndrome were significantly more likely to report lifetime experience of suicidal ideation than were individuals from a general UK population sample (odds ratio 9·6 [95% CI 7·6–11·9], p<0·0001), people with one, two, or more medical illnesses (p<0·0001), or people with psychotic illness (p=0·019). […] Lifetime experience of depression (p=0·787), suicidal ideation (p=0·164), and suicide plans or attempts (p=0·06) did not differ significantly between men and women […] Individuals who reported suicide plans or attempts had significantly higher Autism Spectrum Quotient scores than those who did not […] Empathy Quotient scores and ages did not differ between individuals who did or did not report suicide plans or attempts (table 4). Patients with self-reported depression or suicidal ideation did not have significantly higher Autism Spectrum Quotient scores, Empathy Quotient scores, or age than did those without depression or suicidal ideation”.

The fact that people with Asperger’s are more likely to be depressed and contemplate suicide is consistent with previous observations that they’re also more likely to die from suicide – for example, a paper I blogged a while back found that in that particular study (a large Swedish population-based cohort study), people with ASD were more than 7 times as likely to die from suicide as the comparable controls.

Also related: Suicidal tendencies hard to spot in some people with autism.

This link has some great graphs and tables of suicide data from the US.

Also autism-related: Increased perception of loudness in autism. This is one of the ‘important ones’ for me personally – I am much more sound-sensitive than are most people.

vi. Early versus Delayed Invasive Intervention in Acute Coronary Syndromes.

“Earlier trials have shown that a routine invasive strategy improves outcomes in patients with acute coronary syndromes without ST-segment elevation. However, the optimal timing of such intervention remains uncertain. […] We randomly assigned 3031 patients with acute coronary syndromes to undergo either routine early intervention (coronary angiography ≤24 hours after randomization) or delayed intervention (coronary angiography ≥36 hours after randomization). The primary outcome was a composite of death, myocardial infarction, or stroke at 6 months. A prespecified secondary outcome was death, myocardial infarction, or refractory ischemia at 6 months. […] Early intervention did not differ greatly from delayed intervention in preventing the primary outcome, but it did reduce the rate of the composite secondary outcome of death, myocardial infarction, or refractory ischemia and was superior to delayed intervention in high-risk patients.”

vii. Some wikipedia links:

Behrens–Fisher problem.
Sailing ship tactics (I figured I had to read up on this if I were to get anything out of the Aubrey-Maturin books).
Anatomical terms of muscle.
Phatic expression (“a phatic expression […] is communication which serves a social function such as small talk and social pleasantries that don’t seek or offer any information of value.”)
Three-domain system.
Beringian wolf (featured).
Subdural hygroma.
Cayley graph.
Schur polynomial.
Solar neutrino problem.
Hadamard product (matrices).
True polar wander.
Newton’s cradle.

viii. Determinant versus permanent (mathematics – technical).

ix. Some years ago I wrote a few English-language posts about some of the various statistical/demographic properties of immigrants living in Denmark, based on numbers included in a publication by Statistics Denmark. I did it by translating the observations included in that publication, which was only published in Danish. I briefly considered doing the same thing again when the 2017 data arrived, but decided against it; I recalled that those posts took a lot of time to write back then, and it didn’t seem worth the effort. Danish readers might still be interested in having a look at the data, if they haven’t already – here’s a link to the publication Indvandrere i Danmark 2017.

x. A banter blitz session with grandmaster Peter Svidler, who recently became the first Russian ever to win the Russian Chess Championship 8 times. He’s currently shared-second in the World Rapid Championship after 10 rounds and is now in the top 10 on the live rating list in both classical and rapid – seems like he’s had a very decent year.

xi. I recently discovered Dr. Whitecoat’s blog. The patient encounters are often interesting.

December 28, 2017 Posted by | Astronomy, autism, Biology, Cardiology, Chess, Computer science, History, Mathematics, Medicine, Neurology, Physics, Psychiatry, Psychology, Random stuff, Statistics, Studies, Wikipedia, Zoology | Leave a comment

Plate Tectonics (II)

Some more observations and links below.

I may or may not add a third post about the book at a later point in time; there’s a lot of interesting stuff included in this book.

“Because of the thickness of the lithosphere, its bending causes […] a stretching of its upper surface. This stretching of the upper portion of the lithosphere manifests itself as earthquakes and normal faulting, the style of faulting that occurs when a region extends horizontally […]. Such earthquakes commonly occur after great earthquakes […] Having been bent down at the trench, the lithosphere […] slides beneath the overriding lithospheric plate. Fault plane solutions of shallow focus earthquakes […] provide the most direct evidence for this underthrusting. […] In great earthquakes, […] the deformation of the surface of the Earth that occurs during such earthquakes corroborates the evidence for underthrusting of the oceanic lithosphere beneath the landward side of the trench. The 1964 Alaskan earthquake provided the first clear example. […] Because the lithosphere is much colder than the asthenosphere, when a plate of lithosphere plunges into the asthenosphere at rates of tens to more than a hundred millimetres per year, it remains colder than the asthenosphere for tens of millions of years. In the asthenosphere, temperatures approach those at which some minerals in the rock can melt. Because seismic waves travel more slowly and attenuate (lose energy) more rapidly in hot, and especially in partially molten, rock than they do in colder rock, the asthenosphere is not only a zone of weakness, but also characterized by low speeds and high attenuation of seismic waves. […] many seismologists use the waves sent by earthquakes to study the Earth’s interior, with little regard for earthquakes themselves. The speeds at which these waves propagate and the rate at which the waves die out, or attenuate, have provided much of the data used to infer the Earth’s internal structure.”

“S waves especially, but also P waves, lose much of their energy while passing through the asthenosphere. The lithosphere, however, transmits P and S waves with only modest loss of energy. This difference is apparent in the extent to which small earthquakes can be felt. In regions like the western United States or in Greece and Italy, the lithosphere is thin, and the asthenosphere reaches up to shallow depths. As a result earthquakes, especially small ones, are felt over relatively small areas. By contrast, in the eastern United States or in Eastern Europe, small earthquakes can be felt at large distances. […] Deep earthquakes occur several hundred kilometres west of Japan, but they are felt with greater intensity and can be more destructive in eastern than western Japan […]. This observation, of course, puzzled Japanese seismologists when they first discovered deep focus earthquakes; usually people close to the epicentre (the point directly over the earthquake) feel stronger shaking than people farther from it. […] Tokuji Utsu […] explained this greater intensity of shaking along the more distant, eastern side of the islands than on the closer, western side by appealing to a window of low attenuation parallel to the earthquake zone and plunging through the asthenosphere beneath Japan and the Sea of Japan to its west. Paths to eastern Japan travelled efficiently through that window, the subducted slab of lithosphere, whereas those to western Japan passed through the asthenosphere and were attenuated strongly.”

“Shallow earthquakes occur because stress on a fault surface exceeds the resistance to slip that friction imposes. When two objects are forced to slide past one another, and friction opposes the force that pushes one past the other, the frictional resistance can be increased by pressing the two objects together more forcefully. Many of us experience this when we put sandbags in the trunks […] of our cars in winter to give the tyres greater traction on slippery roads. The same applies to faults in the Earth’s crust. As the pressure increases with increasing depth in the Earth, frictional resistance to slip on faults should increase. For depths greater than a few tens of kilometres, the high pressure should press the two sides of a fault together so tightly that slip cannot occur. Thus, in theory, deep-focus earthquakes ought not to occur.”

“In general, rock […] is brittle at low temperatures but becomes soft and flows at high temperature. The intermediate- and deep-focus earthquakes occur within the lithosphere, where at a given depth, the temperature is atypically low. […] the existence of intermediate- or deep-focus earthquakes is usually cited as evidence for atypically cold material at asthenospheric depths. Most such earthquakes, therefore, occur in oceanic lithosphere that has been subducted within the last 10–20 million years, sufficiently recently that it has not heated up enough to become soft and weak […]. The inference that the intermediate- and deep-focus earthquakes occur within the lithosphere and not along its top edge remains poorly appreciated among Earth scientists. […] the fault plane solutions suggest that the state of stress in the downgoing slab is what one would expect if the slab deformed like a board, or slab of wood. Accordingly, we infer that the earthquakes occurring within the downgoing slab of lithosphere result from stress within the slab, not from movement of the slab past the surrounding asthenosphere. Because the lithosphere is much stronger than the surrounding asthenosphere, it can support much higher stresses than the asthenosphere can. […] observations are consistent with a cold, heavy slab sinking into the asthenosphere and being pulled downward by gravity acting on it, but then encountering resistance at depths of 500–700 km despite the pull of gravity acting on the excess mass of the slab. Where both intermediate and deep-focus earthquakes occur, a gap, or a minimum, in earthquake activity near a depth of 300 km marks the transition between the upper part of the slab stretched by gravity pulling it down and the lower part where the weight of the slab above it compresses it. In the transition region between them, there would be negligible stress and, therefore, no or few earthquakes.”

“Volcanoes occur where rock melts, and where that molten rock can rise to the surface. […] For essentially all minerals […] melting temperatures […] depend on the extent to which the minerals have been contaminated by impurities. […] hydrogen, when it enters most crystal lattices, lowers the melting temperature of the mineral. Hydrogen is most obviously present in water (H2O), but is hardly a major constituent of the oxygen-, silicon-, magnesium-, and iron-rich mantle. The top of the downgoing slab of lithosphere includes fractured crust and sediment deposited atop it. Oceanic crust has been stewing in seawater for tens of millions of years, so that its cracks have become full either of liquid water or of minerals to which water molecules have become loosely bound. […] the downgoing slab acts like a caravan of camels carrying water downward into an upper mantle desert. […] The downgoing slab of lithosphere carries water in cracks in oceanic crust and in the interstices among sediment grains, and when released to the mantle above it, hydrogen dissolved in crystal lattices lowers the melting temperature of that rock enough that some of it melts. Many of the world’s great volcanoes […] begin as small amounts of melt above the subducted slabs of lithosphere.”

“… (in most regions) plates of lithosphere behave as rigid, and therefore undeformable, objects. The high strength of intact lithosphere, stronger than either the asthenosphere below it or the material along the boundaries of plates, allows the lithospheric plates to move with respect to one another without deforming (much). […] The essence of ‘plate tectonics’ is that vast regions move with respect to one another as (nearly) rigid objects. […] Dan McKenzie of Cambridge University, one of the scientists to present the idea of rigid plates, often argued that plate tectonics was easy to accept because the kinematics, the description of relative movements of plates, could be separated from the dynamics, the system of forces that causes plates to move with respect to one another in the directions and at the speeds that they do. Making such a separation is impossible for the flow of most fluids, […] whose movement cannot be predicted without an understanding of the forces acting on separate parcels of fluid. In part because of its simplicity, plate tectonics passed from being a hypothesis to an accepted theory in a short time.”

“[F]or plates that move over the surface of a sphere, all relative motion can be described simply as a rotation about an axis that passes through the centre of the sphere. The Earth itself obviously rotates around an axis through the North and South Poles. Similarly, the relative displacement of two plates with respect to one another can be described as a rotation of one plate with respect to the other about an axis, or ‘pole’, of rotation […] if we know how two plates, for example Eurasia and Africa, move with respect to a third plate, like North America, we can calculate how those two plates (Eurasia and Africa) move with respect to each other. A rotation about an axis in the Arctic Ocean describes the movement of the Africa plate, with respect to the North America plate […]. Combining the relative motion of Africa with respect to North America with the relative motion of North America with respect to Eurasia allows us to calculate that the African continent moves toward Eurasia by a rotation about an axis that lies west of northern Africa. […] By combining the known relative motion of pairs of plates […] we can calculate how fast plates converge with respect to one another and in what direction.”
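
The composition of finite rotations described here can be illustrated with a short numerical sketch; the poles and rotation angles below are invented for illustration and are not real plate-motion data.

```python
import numpy as np

def unit_vector(lat_deg, lon_deg):
    """Cartesian unit vector for a rotation pole at the given latitude/longitude."""
    lat, lon = np.radians([lat_deg, lon_deg])
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

def rotation_matrix(pole, angle_deg):
    """Rotation by angle_deg about the axis through the given pole (Rodrigues formula)."""
    a = np.radians(angle_deg)
    x, y, z = pole
    K = np.array([[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]])
    return np.eye(3) + np.sin(a) * K + (1 - np.cos(a)) * (K @ K)

# Hypothetical finite rotations (poles and angles are made up for illustration):
R_af_na = rotation_matrix(unit_vector(70, -20), 5.0)  # Africa relative to North America
R_na_eu = rotation_matrix(unit_vector(65, 130), 3.0)  # North America relative to Eurasia

# Africa relative to Eurasia is the composition of the two rotations
# (the order matters for finite rotations).
R_af_eu = R_na_eu @ R_af_na

# Recover the single equivalent pole and angle from the composed matrix.
angle = np.degrees(np.arccos((np.trace(R_af_eu) - 1) / 2))
axis = np.array([R_af_eu[2, 1] - R_af_eu[1, 2],
                 R_af_eu[0, 2] - R_af_eu[2, 0],
                 R_af_eu[1, 0] - R_af_eu[0, 1]])
axis /= np.linalg.norm(axis)
pole_lat = np.degrees(np.arcsin(axis[2]))
pole_lon = np.degrees(np.arctan2(axis[1], axis[0]))
print(round(angle, 2), round(pole_lat, 1), round(pole_lon, 1))
```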

“[W]e can measure how plates move with respect to one another using Global Positioning System (GPS) measurements of points on nearly all of the plates. Such measurements show that speeds of relative motion between some pairs of plates have changed a little bit since 2 million years ago, but in general, the GPS measurements corroborate the inferences drawn both from rates of seafloor spreading determined using magnetic anomalies and from directions of relative plate motion determined using orientations of transform faults and fault plane solutions of earthquakes. […] Among tests of plate tectonics, none is more convincing than the GPS measurements […] numerous predictions of rates or directions of present-day plate motions and of large displacements of huge terrains have been confirmed many times over. […] When, more than 45 years ago, plate tectonics was proposed to describe relative motions of vast terrains, most saw it as an approximation that worked well, but that surely was imperfect. […] plate tectonics is imperfect, but GPS measurements show that the plates are surprisingly rigid. […] Long histories of plate motion can be reduced to relatively few numbers, the latitudes and longitudes of the poles of rotation, and the rates or amounts of rotation about those axes.”

Links:

Wadati–Benioff zone.
Translation (geometry).
Rotation (mathematics).
Poles of rotation.
Rotation around a fixed axis.
Euler’s rotation theorem.
Isochron dating.
Tanya Atwater.

December 25, 2017 Posted by | Books, Chemistry, Geology, Physics | Leave a comment

Plate Tectonics (I)

Some quotes and links related to the first half of the book‘s coverage:

“The fundamental principle of plate tectonics is that large expanses of terrain, thousands of kilometres in lateral extent, behave as thin (~100 km in thickness) rigid layers that move with respect to one another across the surface of the Earth. The word ‘plate’ carries the image of a thin rigid object, and ‘tectonics’ is a geological term that refers to large-scale processes that alter the structure of the Earth’s crust. […] The Earth is stratified with a light crust overlying denser mantle. Just as the height of icebergs depends on the mass of ice below the surface of the ocean, so […] the light crust of the Earth floats on the denser mantle, standing high where crust is thick, and lying low, deep below the ocean, where it should be thin. Wegener recognized that oceans are mostly deep, and he surmised correctly that the crust beneath oceans must be much thinner than that beneath continents.”

“From a measurement of the direction in which a hunk of rock is magnetized, one can infer where the North Pole lay relative to that rock at the time it was magnetized. It follows that if continents had drifted, rock of different ages on the continents should be magnetized in different directions, not just from each other but more importantly in directions inconsistent with the present-day magnetic field. […] In the 1950s, several studies using palaeomagnetism were carried out to test whether continents had drifted, and most such tests passed. […] Palaeomagnetic results not only supported the idea of continental drift, but they also offered constraints on timing and rates of drift […] in the 1960s, the idea of continental drift saw a renaissance, but subsumed within a broader framework, that of plate tectonics.”

“If one wants to study deformation of the Earth’s crust in action, the quick and dirty way is to study earthquakes. […] Until the 1960s, studying fracture zones in action was virtually impossible. Nearly all of them lie far offshore beneath the deep ocean. Then, in response to a treaty in the early 1960s disallowing nuclear explosions in the ocean, atmosphere, or space, but permitting underground testing of them, the Department of Defense of the USA put in place the World-Wide Standardized Seismograph Network, a global network with more than 100 seismograph stations. […] Suddenly remote earthquakes, not only those on fracture zones but also those elsewhere throughout the globe […], became amenable to study. […] the study of earthquakes played a crucial role in the recognition and acceptance of plate tectonics. […] By the early 1970s, the basic elements of plate tectonics had permeated essentially all of Earth science. In addition to the obvious consequences, like confirmation of continental drift, emphasis shifted from determining the history of the planet to understanding the processes that had shaped it.”

“[M]ost solids are strongest when cold, and become weaker when warmed. Temperature increases into the Earth. As a result the strongest rock lies close to the surface, and rock weakens with depth. Moreover, olivine, the dominant mineral in the upper mantle, seems to be stronger than most crustal minerals; so, in many regions, the strongest rock is at the top of the mantle. Beneath oceans where crust is thin, ~7 km, the lithosphere is mostly mantle […]. Because temperature increases gradually with depth, the boundary between strong lithosphere and underlying weak asthenosphere is not sharp. Nevertheless, because the difference in strength is large, subdividing the outer part of the Earth into two layers facilitates an understanding of plate tectonics. Reduced to its essence, the basic idea that we call plate tectonics is simply a description of the relative movements of separate plates of lithosphere as these plates move over the underlying weaker, hotter asthenosphere. […] Most of the Earth’s surface lies on one of the ~20 major plates, whose sizes vary from huge, like the Pacific plate, to small, like the Caribbean plate […], or even smaller. Narrow belts of earthquakes mark the boundaries of separate plates […]. The key to plate tectonics lies in these plates behaving as largely rigid objects, and therefore undergoing only negligible deformation.”

“Although the amounts and types of sediment deposited on the ocean bottom vary from place to place, the composition and structure of the oceanic crust is remarkably uniform beneath the deep ocean. The structure of oceanic lithosphere depends primarily on its age […] As the lithosphere ages, it thickens, and the rate at which it cools decreases. […] the rate that heat is lost through the seafloor decreases with the age of lithosphere. […] As the lithospheric plate loses heat and cools, like most solids, it contracts. This contraction manifests itself as a deepening of the ocean. […] Seafloor spreading in the Pacific occurs two to five times faster than it does in the Atlantic. […] when seafloor spreading is slow, new basalt rising to the surface at the ridge axis can freeze onto the older seafloor on its edges before rising as high as it would otherwise. As a result, a valley […] forms. Where spreading is faster, however, as in the Pacific, new basalt rises to a shallower depth and no such valley forms. […] The spreading apart of two plates along a mid-ocean ridge system occurs by divergence of the two plates along straight segments of mid-ocean ridge that are truncated at fracture zones. Thus, the plate boundary at a mid-ocean ridge has a zig-zag shape, with spreading centres making zigs and transform faults making zags along it.”
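
The deepening of the seafloor with lithosphere age is often approximated by a square-root-of-age fit of the Parsons–Sclater type; the book does not give numbers here, so the coefficients below are the commonly cited rough values, and the fit only holds for relatively young (roughly < 70 Myr) seafloor.

```python
import math

def seafloor_depth_m(age_myr, ridge_depth_m=2500.0, coeff=350.0):
    """Approximate ocean depth below sea level as a function of lithosphere age."""
    return ridge_depth_m + coeff * math.sqrt(age_myr)

for age in (0, 10, 40, 70):
    print(age, round(seafloor_depth_m(age)))  # ~2500, ~3607, ~4714, ~5428 m
```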

“Geochemists are confident that the volume of water in the oceans has not changed by a measurable amount for hundreds of millions, if not billions, of years. Yet, the geologic record shows several periods when continents were flooded to a much greater extent than today. For example, 90 million years ago, the Midwestern United States and neighbouring Canada were flooded. One could have sailed due north from the Gulf of Mexico to Hudson’s Bay and into the Arctic. […] If sea level has risen and fallen, while the volume of water has remained unchanged, then the volume of the basin holding the water must have changed. The rates at which seafloor is created at the different spreading centres today are not the same, and such rates at all spreading centres have varied over geologic time. Imagine a time in the past when seafloor at some of the spreading centres was created at a faster rate than it is today. If this relatively high rate had continued for a few tens of millions of years, there would have been more young ocean floor than today, and correspondingly less old floor […]. Thus, the average depth of the ocean would be shallower than it is today, and the volume of the ocean basin would be smaller than today. Water should have spilled onto the continent. Most now attribute the high sea level in the Cretaceous Period (145 to 65 million years ago) to unusually rapid creation of seafloor, and hence to a state when seafloor was younger on average than today.”

“Wilson focused on the two major differences between ordinary strike-slip faults, or transcurrent faults, and transform faults on fracture zones. (1) If transcurrent faulting occurred, slip should occur along the entire fracture zone; but for transform faulting, only the portion between the segments of spreading centres would be active. (2) The sense of slip on the faults would be opposite for these two cases: if right-lateral for one, then left-lateral for the other […] The occurrences of earthquakes along a fault provide the most convincing evidence that the fault is active. Slip on most faults and most deformation of the Earth’s crust to make mountains occurs not slowly and steadily on human timescales, but abruptly during earthquakes. Accordingly, a map of earthquakes is, to a first approximation, a map of active faults on which regions, such as lithospheric plates, slide past one another […] When an earthquake occurs, slip on a fault takes place. One side of the fault slides past the other so that slip is parallel to the plane of the fault; the opening of cracks, into which cows or people can fall, is rare and atypical. Repeated studies of earthquakes and the surface ruptures accompanying them show that the slip during an earthquake is representative of the sense of cumulative displacement that has occurred on faults over geologic timescales. Thus earthquakes give us snapshots of processes that occur over thousands to millions of years. Two aspects of a fault define it: the orientation of the fault plane, which can be vertical or gently dipping, and the sense of slip: the direction that one side of the fault moves with respect to the other […] To a first approximation, boundaries between plates are single faults. Thus, if we can determine both the orientation of the fault plane and the sense of slip on it during an earthquake, we can infer the direction that one plate moves with respect to the other. Often during earthquakes, but not always, slip on the fault offsets the Earth’s surface, and we can directly observe the sense of motion […]. In the deep ocean, however, this cannot be done as a general practice, and we must rely on more indirect methods.”

“Because seafloor spreading creates new seafloor at the mid-ocean ridges, the newly formed crust must find accommodation: either the Earth must expand or lithosphere must be destroyed at the same rate that it is created. […] for the Earth not to expand (or contract), the sum total of new lithosphere made at spreading centres must be matched by the removal, by subduction, of an equal amount of lithosphere at island arc structures. […] Abundant evidence […] shows that subduction of lithosphere does occur. […] The subduction process […] differs fundamentally from that of seafloor spreading, in that subduction is asymmetric. Whereas two plates are created and grow larger at equal rates at spreading centers (mid-ocean ridges and rises), the areal extent of only one plate decreases at a subduction zone. The reason for this asymmetry derives from the marked dependence of the strength of rock on temperature. […] At spreading centres, hot weak rock deforms easily as it rises at mid-ocean ridges, cools, and then becomes attached to one of the two diverging plates. At subduction zones, however, cold and therefore strong lithosphere resists bending and contortion. […] two plates of lithosphere, each some 100 km thick, cannot simply approach one another, turn sharp corners […], and dive steeply into the asthenosphere. Much less energy is dissipated if one plate undergoes modest flexure and then slides at a gentle angle beneath the other, than if both plates were to undergo pronounced bending and then plunged together steeply into the asthenosphere. Nature takes the easier, energetically more efficient, process. […] Before it plunges beneath the island arc, the subducting plate of lithosphere bends down gently to cause a deep-sea trench […] As the plate bends down to form the trench, the lithosphere seaward of the trench is flexed upwards slightly. […] the outer topographic rise […] will be lower but wider for thicker lithosphere.”

Plate tectonics.
Andrija Mohorovičić. Mohorovičić discontinuity.
Archimedes’ principle.
Isostasy.
Harold Jeffreys. Keith Edward Bullen. Edward A. Irving. Harry Hammond Hess. Henry William Menard. Maurice Ewing.
Paleomagnetism.
Lithosphere. Asthenosphere.
Mid-ocean ridge. Bathymetry. Mid-Atlantic Ridge. East Pacific Rise. Seafloor spreading.
Fracture zone. Strike-slip fault. San Andreas Fault.
World-Wide Standardized Seismograph Network (USGS).
Vine–Matthews–Morley hypothesis.
Geomagnetic reversal. Proton precession magnetometer. Jaramillo (normal) event.
Potassium–argon dating.
Deep Sea Drilling Project.
“McKenzie Equations” for magma migration.
Transform fault.
Mendocino Fracture Zone.
Subduction.
P-wave. S-wave. Fault-plane solution. Compressional waves.
Triple junction.

December 23, 2017 Posted by | Books, Geology, Physics | Leave a comment

The Periodic Table

“After evolving for nearly 150 years through the work of numerous individuals, the periodic table remains at the heart of the study of chemistry. This is mainly because it is of immense practical benefit for making predictions about all manner of chemical and physical properties of the elements and possibilities for bond formation. Instead of having to learn the properties of the 100 or more elements, the modern chemist, or the student of chemistry, can make effective predictions from knowing the properties of typical members of each of the eight main groups and those of the transition metals and rare earth elements.”

I wasn’t very impressed with this book, but it wasn’t terrible. It didn’t include much that I didn’t already know, and in my opinion it focused excessively on historical aspects. Some of those were interesting, for example the problems that confronted chemists trying to make sense of how best to categorize chemical elements in the late 19th century, before the discovery of the neutron (the number of protons in the nucleus is not the same thing as the atomic weight of an atom – which was highly relevant because: “when it came to deciding upon the most important criterion for classifying the elements, Mendeleev insisted that atomic weight ordering would tolerate no exceptions”), but I’d have liked to learn a lot more about e.g. some of the chemical properties of the subgroups, instead of just revisiting stuff I’d learned earlier in other publications in the series. I assume people who are new to chemistry – or who have forgotten a lot and would like to rectify this – might feel differently about the book and the way it covers the material, but I don’t think this is one of the best publications in the physics/chemistry categories of this OUP series.

Some quotes and links below.

“Lavoisier held that an element should be defined as a material substance that has yet to be broken down into any more fundamental components. In 1789, Lavoisier published a list of 33 simple substances, or elements, according to this empirical criterion. […] the discovery of electricity enabled chemists to isolate many of the more reactive elements, which, unlike copper and iron, could not be obtained by heating their ores with charcoal (carbon). There have been a number of major episodes in the history of chemistry when half a dozen or so elements were discovered within a period of a few years. […] Following the discovery of radioactivity and nuclear fission, yet more elements were discovered. […] Today, we recognize about 90 naturally occurring elements. Moreover, an additional 25 or so elements have been artificially synthesized.”

“Chemical analogies between elements in the same group are […] of great interest in the field of medicine. For example, the element beryllium sits at the top of group 2 of the periodic table and above magnesium. Because of the similarity between these two elements, beryllium can replace the element magnesium that is essential to human beings. This behaviour accounts for one of the many ways in which beryllium is toxic to humans. Similarly, the element cadmium lies directly below zinc in the periodic table, with the result that cadmium can replace zinc in many vital enzymes. Similarities can also occur between elements lying in adjacent positions in rows of the periodic table. For example, platinum lies next to gold. It has long been known that an inorganic compound of platinum called cis-platin can cure various forms of cancer. As a result, many drugs have been developed in which gold atoms are made to take the place of platinum, and this has produced some successful new drugs. […] [R]ubidium […] lies directly below potassium in group 1 of the table. […] atoms of rubidium can mimic those of potassium, and so like potassium can easily be absorbed into the human body. This behaviour is exploited in monitoring techniques, since rubidium is attracted to cancers, especially those occurring in the brain.”

“Each horizontal row represents a single period of the table. On crossing a period, one passes from metals such as potassium and calcium on the left, through transition metals such as iron, cobalt, and nickel, then through some semi-metallic elements like germanium, and on to some non-metals such as arsenic, selenium, and bromine, on the right side of the table. In general, there is a smooth gradation in chemical and physical properties as a period is crossed, but exceptions to this general rule abound […] Metals themselves can vary from soft dull solids […] to hard shiny substances […]. Non-metals, on the other hand, tend to be solids or gases, such as carbon and oxygen respectively. In terms of their appearance, it is sometimes difficult to distinguish between solid metals and solid non-metals. […] The periodic trend from metals to non-metals is repeated with each period, so that when the rows are stacked, they form columns, or groups, of similar elements. Elements within a single group tend to share many important physical and chemical properties, although there are many exceptions.”

“There have been quite literally over 1,000 periodic tables published in print […] One of the ways of classifying the periodic tables that have been published is to consider three basic formats. First of all, there are the originally produced short-form tables published by the pioneers of the periodic table like Newlands, Lothar Meyer, and Mendeleev […] These tables essentially crammed all the then known elements into eight vertical columns or groups. […] As more information was gathered on the properties of the elements, and as more elements were discovered, a new kind of arrangement called the medium-long-form table […] began to gain prominence. Today, this form is almost completely ubiquitous. One odd feature is that the main body of the table does not contain all the elements. […] The ‘missing’ elements are grouped together in what looks like a separate footnote that lies below the main table. This act of separating off the rare earth elements, as they have traditionally been called, is performed purely for convenience. If it were not carried out, the periodic table would appear much wider, 32 elements wide to be precise, instead of 18 elements wide. The 32-wide element format does not lend itself readily to being reproduced on the inside cover of chemistry textbooks or on large wall-charts […] if the elements are shown in this expanded form, as they sometimes are, one has the long-form periodic table, which may be said to be more correct than the familiar medium-long form, in the sense that the sequence of elements is unbroken […] there are many forms of the periodic table, some designed for different uses. Whereas a chemist might favour a form that highlights the reactivity of the elements, an electrical engineer might wish to focus on similarities and patterns in electrical conductivities.”

“The periodic law states that after certain regular but varying intervals, the chemical elements show an approximate repetition in their properties. […] This periodic repetition of properties is the essential fact that underlies all aspects of the periodic system. […] The varying length of the periods of elements and the approximate nature of the repetition has caused some chemists to abandon the term ‘law’ in connection with chemical periodicity. Chemical periodicity may not seem as law-like as most laws of physics. […] A modern periodic table is much more than a collection of groups of elements showing similar chemical properties. In addition to what may be called ‘vertical relationships’, which embody triads of elements, a modern periodic table connects together groups of elements into an orderly sequence. A periodic table consists of a horizontal dimension, containing dissimilar elements, as well as a vertical dimension with similar elements.”

“[I]n modern terms, metals form positive ions by the loss of electrons, while non-metals gain electrons to form negative ions. Such oppositely charged ions combine together to form neutrally charged salts like sodium chloride or calcium bromide. There are further complementary aspects of metals and non-metals. Metal oxides or hydroxides dissolve in water to form bases, while non-metal oxides or hydroxides dissolve in water to form acids. An acid and a base react together in a ‘neutralization’ reaction to form a salt and water. Bases and acids, just like metals and non-metals from which they are formed, are also opposite but complementary.”
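
A standard textbook illustration of this complementarity (my addition, not a quote from the book): a metal oxide such as Na2O dissolves in water to give a base, Na2O + H2O → 2 NaOH, while a non-metal oxide such as SO3 gives an acid, SO3 + H2O → H2SO4; the two then neutralize one another, 2 NaOH + H2SO4 → Na2SO4 + 2 H2O, leaving a salt and water.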

“[T]he law of constant proportion, [is] the fact that when two elements combine together, they do so in a constant ratio of their weights. […] The fact that macroscopic samples consist of a fixed ratio by weight of two elements reflects the fact that two particular atoms are combining many times over and, since they have particular masses, the product will also reflect that mass ratio. […] the law of multiple proportions [refers to the fact that] [w]hen one element A combines with another one, B, to form more than one compound, there is a simple ratio between the combining masses of B in the two compounds. For example, carbon and oxygen combine together to form carbon monoxide and carbon dioxide. The weight of combined oxygen in the dioxide is twice as much as the weight of combined oxygen in the monoxide.”
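
A quick way to see the law of multiple proportions at work is the carbon/oxygen example the passage mentions; the small sketch below (mine, not the book's, using approximate standard atomic masses) simply checks that the combined oxygen per gram of carbon in carbon dioxide is twice that in carbon monoxide.

```python
# Illustrative check of the law of multiple proportions (assumed atomic masses).
C, O = 12.011, 15.999            # approximate atomic masses in g/mol

o_per_c_in_monoxide = O / C      # grams of oxygen per gram of carbon in CO
o_per_c_in_dioxide = 2 * O / C   # grams of oxygen per gram of carbon in CO2

print(o_per_c_in_dioxide / o_per_c_in_monoxide)  # 2.0 -- a simple whole-number ratio
```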

“One of his greatest triumphs, and perhaps the one that he is best remembered for, is Mendeleev’s correct prediction of the existence of several new elements. In addition, he corrected the atomic weights of some elements as well as relocating other elements to new positions within the periodic table. […] But not all of Mendeleev’s predictions were so dramatically successful, a feature that seems to be omitted from most popular accounts of the history of the periodic table. […] he was unsuccessful in as many as nine out of his eighteen published predictions […] some of the elements involved the rare earths which resemble each other very closely and which posed a major challenge to the periodic table for many years to come. […] The discovery of the inert gases at the end of the 19th century [also] represented an interesting challenge to the periodic system […] in spite of Mendeleev’s dramatic predictions of many other elements, he completely failed to predict this entire group of elements (He, Ne, Ar, Kr, Xe, Rn). Moreover, nobody else predicted these elements or even suspected their existence. The first of them to be isolated was argon, in 1894 […] Mendeleev […] could not accept the notion that elements could be converted into different ones. In fact, after the Curies began to report experiments that suggested the breaking up of atoms, Mendeleev travelled to Paris to see the evidence for himself, close to the end of his life. It is not clear whether he accepted this radical new notion even after his visit to the Curie laboratory.”

“While chemists had been using atomic weights to order the elements there had been a great deal of uncertainty about just how many elements remained to be discovered. This was due to the irregular gaps that occurred between the values of the atomic weights of successive elements in the periodic table. This complication disappeared when the switch was made to using atomic number. Now the gaps between successive elements became perfectly regular, namely one unit of atomic number. […] The discovery of isotopes […] came about partly as a matter of necessity. The new developments in atomic physics led to the discovery of a number of new elements such as Ra, Po, Rn, and Ac which easily assumed their rightful places in the periodic table. But in addition, 30 or so more apparent new elements were discovered over a short period of time. These new species were given provisional names like thorium emanation, radium emanation, actinium X, uranium X, thorium X, and so on, to indicate the elements which seemed to be producing them. […] To Soddy, the chemical inseparability [of such elements] meant only one thing, namely that these were two forms, or more, of the same chemical element. In 1913, he coined the term ‘isotopes’ to signify two or more atoms of the same element which were chemically completely inseparable, but which had different atomic weights.”
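
To make the first point concrete (an illustrative example of my own, not from the book): ordered by atomic weight, the gap between aluminium (≈27.0) and silicon (≈28.1) is about 1.1 units, while that between silicon and phosphorus (≈31.0) is about 2.9 units, so the weights alone gave no way of telling how many undiscovered elements might lie between two known ones. Ordered by atomic number (13, 14, 15, …), every gap is exactly one, and a missing element announces itself as a missing integer.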

“The popular view reinforced in most textbooks is that chemistry is nothing but physics ‘deep down’ and that all chemical phenomena, and especially the periodic system, can be developed on the basis of quantum mechanics. […] This is important because chemistry books, especially textbooks aimed at teaching, tend to give the impression that our current explanation of the periodic system is essentially complete. This is just not the case […] the energies of the quantum states for any many-electron atom can only be calculated approximately from first principles, although there is extremely good agreement with observed energy values. Nevertheless, some global aspects of the periodic table have still not been derived from first principles to this day. […] We know where the periods close because we know that the noble gases occur at elements 2, 10, 18, 36, 54, etc. Similarly, we have knowledge of the order of orbital filling from observations but not from theory. The conclusion, seldom acknowledged in textbook accounts of the explanation of the periodic table, is that quantum physics only partly explains the periodic table. Nobody has yet deduced the order of orbital filling from the principles of quantum mechanics. […] The situation that exists today is that chemistry, and in particular the periodic table, is regarded as being fully explained by quantum mechanics. Even though this is not quite the case, the explanatory role that the theory continues to play is quite undeniable. But what seems to be forgotten […] is that the periodic table led to the development of many aspects of modern quantum mechanics, and so it is rather short-sighted to insist that only the latter explains the former.”
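
For readers who want to see the empirical rule in question, the sketch below (my addition; the rule is the Madelung n + l ordering listed in the links, obtained from observation rather than derived from quantum mechanics) generates the conventional order of orbital filling and the electron counts at which the periods close, reproducing the noble-gas numbers 2, 10, 18, 36, 54, 86 mentioned in the quote.

```python
# A minimal sketch of the empirical Madelung (n + l) ordering rule.
# Subshells are filled in order of increasing n + l, and for equal n + l
# in order of increasing n; the rule itself is an observation, not a derivation.

def madelung_order(max_n=7):
    """Return subshells (n, l) sorted by increasing n + l, then increasing n."""
    subshells = [(n, l) for n in range(1, max_n + 1) for l in range(n)]
    return sorted(subshells, key=lambda nl: (nl[0] + nl[1], nl[0]))

def period_closures(max_n=7):
    """Cumulative electron counts after 1s and after each p subshell (the noble gases)."""
    labels = "spdfghi"
    total, closures = 0, []
    for n, l in madelung_order(max_n):
        total += 2 * (2 * l + 1)                  # subshell capacity
        if (n, l) == (1, 0) or labels[l] == "p":  # He, then Ne, Ar, Kr, Xe, Rn, Og
            closures.append(total)
    return closures

print(period_closures())  # [2, 10, 18, 36, 54, 86, 118]
```

Running it prints [2, 10, 18, 36, 54, 86, 118]; the point of the quoted passage is that this ordering is put in by hand from experiment, not deduced from first principles.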

“[N]uclei with an odd number of protons are invariably more unstable than those with an even number of protons. This difference in stability occurs because protons, like electrons, have a spin of one half and enter into energy orbitals, two by two, with opposite spins. It follows that even numbers of protons frequently produce total spins of zero and hence more stable nuclei than those with unpaired proton spins, as occurs in nuclei with odd numbers of protons […] The larger the nuclear charge, the faster the motion of inner shell electrons. As a consequence of gaining relativistic speeds, such inner electrons are drawn closer to the nucleus, and this in turn has the effect of causing greater screening on the outermost electrons which determine the chemical properties of any particular element. It has been predicted that some atoms should behave chemically in a manner that is unexpected from their presumed positions in the periodic table. Relativistic effects thus pose the latest challenge to test the universality of the periodic table. […] The conclusion [however] seems to be that chemical periodicity is a remarkably robust phenomenon.”
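
A common back-of-the-envelope estimate of why these relativistic effects appear (my addition, not from the book): in a simple Bohr-model picture the speed of a 1s electron is roughly Zαc, with α ≈ 1/137 the fine-structure constant, so for a heavy element such as gold (Z = 79):

\[
\frac{v_{1s}}{c} \approx Z\alpha \approx \frac{79}{137} \approx 0.58,
\qquad
\gamma = \frac{1}{\sqrt{1-(v/c)^{2}}} \approx 1.2 .
\]

The innermost electrons' effective mass therefore rises by roughly 20 per cent and their orbital radius shrinks by about the same factor, which is the contraction, with its knock-on screening of the outer electrons, that the passage describes.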

Some links:

Periodic table.
History of the periodic table.
IUPAC.
Jöns Jacob Berzelius.
Valence (chemistry).
Equivalent weight. Atomic weight. Atomic number.
Rare-earth element. Transuranium element. Glenn T. Seaborg. Island of stability.
Old quantum theory. Quantum mechanics. Electron configuration.
Benjamin Richter. John Dalton. Joseph Louis Gay-Lussac. Amedeo Avogadro. Leopold Gmelin. Alexandre-Émile Béguyer de Chancourtois. John Newlands. Gustavus Detlef Hinrichs. Julius Lothar Meyer. Dmitri Mendeleev. Henry Moseley. Antonius van den Broek.
Diatomic molecule.
Prout’s hypothesis.
Döbereiner’s triads.
Karlsruhe Congress.
Noble gas.
Einstein’s theory of Brownian motion. Jean Baptiste Perrin.
Quantum number. Molecular orbitals. Madelung energy ordering rule.
Gilbert N. Lewis. (“G. N. Lewis is possibly the most significant chemist of the 20th century not to have been awarded a Nobel Prize.”) Irving Langmuir. Niels Bohr. Erwin Schrödinger.
Ionization energy.
Synthetic element.
Alternative periodic tables.
Group 3 element.

December 18, 2017 Posted by | Books, Chemistry, Medicine, Physics