Econstudentlog

Quantifying tradeoffs between fairness and accuracy in online learning

From a brief skim of this paper, which is coauthored by the guy giving this lecture, it looked to me like it covers many of the topics discussed in the lecture. So if you’re unsure about whether or not to watch the lecture (…or if you want to know more about this stuff after you’ve watched it), you might want to have a look at that paper. Although the video is long for a single lecture, I would note that the lecture itself lasts only approximately one hour; the last 10 minutes are devoted to Q&A.

May 12, 2017 Posted by | Computer science, Economics, Lectures, Mathematics | Leave a comment

Information complexity and applications

I have previously posted multiple lectures per ‘lecture-post’ here on the blog, or combined a lecture with other stuff (e.g. links such as those in the previous ‘random stuff’ post). I think such approaches have made me less likely to post lectures at all (if I don’t post a lecture soon after I’ve watched it, experience tells me that I not infrequently simply never get around to posting it), and on top of this I don’t really watch a lot of lectures these days. For these reasons I have decided to start posting single-lecture posts here on the blog. When I think about the time expenditure of the people reading along here, this approach actually also seems justified: although it might take me as much time/work to watch and cover, say, 4 lectures as it would take me to read and cover 100 pages of a textbook, the time required of a reader would be very different in the two cases. You’ll usually be able to read a post that took me multiple hours to write in a short amount of time, whereas the reader’s ‘time advantage’ is close to negligible in the case of lectures (maybe not completely negligible; search costs are not irrelevant). By posting multiple lectures in the same post I probably decrease the expected value of the time readers spend watching the content I post, which seems suboptimal.

Here’s the youtube description of the lecture, which was posted a few days ago on the IAS youtube account:

“Over the past two decades, information theory has reemerged within computational complexity theory as a mathematical tool for obtaining unconditional lower bounds in a number of models, including streaming algorithms, data structures, and communication complexity. Many of these applications can be systematized and extended via the study of information complexity – which treats information revealed or transmitted as the resource to be conserved. In this overview talk we will discuss the two-party information complexity and its properties – and the interactive analogues of classical source coding theorems. We will then discuss applications to exact communication complexity bounds, hardness amplification, and quantum communication complexity.”

He actually decided to skip the quantum communication complexity stuff because of time constraints. I should note that the lecture was ‘easy enough’ that I could follow most of it, so it is not really that difficult, at least not if you know some basic information theory.

A few links to related stuff (you can take these links as indications of what sort of stuff the lecture is about/discusses, if you’re on the fence about whether or not to watch it) – a small illustrative code snippet follows the list:
Computational complexity theory.
Shannon entropy.
Shannon’s source coding theorem.
Communication complexity.
Communications protocol.
Information-based complexity.
Hash function.
From Information to Exact Communication (in the lecture he discusses some aspects covered in this paper).
Unique games conjecture (Two-prover proof systems).
A Counterexample to Strong Parallel Repetition (another paper mentioned/briefly discussed during the lecture).
Pinsker’s inequality.
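
To give a very rough sense of a couple of the quantities linked above – this is just my own toy snippet, not anything from the lecture – here is what Shannon entropy, KL divergence, and Pinsker’s inequality look like in a few lines of Python:

```python
import math

def entropy(p):
    """Shannon entropy (in bits) of a discrete distribution."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(P||Q), in nats."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def total_variation(p, q):
    """Total variation distance between P and Q."""
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

p, q = [0.5, 0.5], [0.9, 0.1]
print(entropy(p))  # 1.0 bit (a fair coin)
print(entropy(q))  # ~0.469 bits (a heavily biased coin)
# Pinsker's inequality: TV(P,Q) <= sqrt(D(P||Q)/2)
assert total_variation(p, q) <= math.sqrt(kl_divergence(p, q) / 2)
```

Pinsker’s inequality is, roughly speaking, what lets you translate bounds on information quantities (like KL divergence) into bounds on how statistically distinguishable two distributions are – the sort of step that shows up in lower-bound arguments of the kind discussed in the lecture.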

An interesting aspect I once again noted during this lecture is the loose linkage you sometimes observe between the topics of game theory/microeconomics and computer science. The link is made explicit later in the talk, when he discusses the unique games conjecture to which I link above, but it’s perhaps worth noting that it is on display even before that point is reached. Around 38 minutes into the lecture he mentions that one of the relevant proofs ‘involves such things as Lagrange multipliers and optimization’. I was far from surprised, as from a certain point of view the problem he discusses at that point is conceptually very similar to some problems encountered in auction theory, where Lagrange multipliers and optimization problems are frequently encountered. If you are too unfamiliar with that field to see the similarity: in an auction context what you have are participants who prefer not to reveal their true willingness to pay, and some auction designs actually work in a manner very similar to the (pseudo-)protocol described in the lecture and are thus used to reveal it (for some subset of participants at least).
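
To make the auction-theory connection a bit more concrete: the classic example of a mechanism that gets participants to reveal their true willingness to pay is the second-price (Vickrey) auction, in which truthful bidding is weakly dominant. Here’s a small self-contained Python sketch of that standard result (mine, not something from the lecture):

```python
import random

def second_price_auction(bids):
    """Highest bidder wins but pays the second-highest bid (Vickrey)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1]
    return winner, price

def payoff(value, bid, others):
    """Bidder's payoff given private value, own bid, and others' bids."""
    bids = dict(others, me=bid)
    winner, price = second_price_auction(bids)
    return value - price if winner == "me" else 0.0

# Truthful bidding is weakly dominant: deviating from bid = value never helps.
random.seed(0)
for _ in range(1000):
    value = random.random()
    others = {f"b{i}": random.random() for i in range(3)}
    truthful = payoff(value, value, others)
    deviant = payoff(value, random.random(), others)
    assert deviant <= truthful + 1e-12
```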

March 12, 2017 Posted by | Computer science, Game theory, Lectures, Papers | Leave a comment

Random stuff

i. A very long but entertaining chess stream by Peter Svidler was recently uploaded on the Chess24 youtube account – go watch it here, if you like that kind of stuff. The fact that it’s five hours long is a reason to rejoice, not a reason to think that it’s ‘too long to be watchable’ – watch it in segments…

People interested in chess might also be interested to know that Magnus Carlsen has made an account on the ICC on which he has played, a result of his recent participation in the ICC Open 2016 (link). A requirement for participation in the tournament was that people had to know whom they were playing against – participants could use accounts with strange names, but no ultra-strong GMs could hide behind anonymous accounts in the finals – so now we know that Magnus Carlsen has played under the nick ‘stoptryharding’ on the ICC. Carlsen did not win the tournament, as he lost to Grischuk in the semi-finals. Some very strong players were incidentally knocked out in the qualifiers, including Nepomniachtchi, the current #5 in the world on the FIDE live blitz ratings.

ii. A lecture:

iii. Below I have added some new words I’ve encountered, most of them in books I’ve read (I have not spent much time on vocabulary.com recently). I’m sure if I were to look all of them up on vocabulary.com some (many?) of them would not be ‘new’ to me, but that’s not going to stop me from including them here (I included the word ‘inculcate’ below for a reason…). Do take note of the spelling of some of these words – some of them are tricky ones included in Bryson’s Dictionary of Troublesome Words: A Writer’s Guide to Getting It Right, which people often get wrong for one reason or another:

Conurbation, epizootic, equable, circumvallation, contravallation, exiguous, forbear, louche, vituperative, thitherto, congeries, inculcate, obtrude, palter, idiolect, hortatory, enthalpy (see also wiki, or Khan Academy), trove, composograph, indite, mugginess, apodosis, protasis, invidious, inveigle, inflorescence, kith, anatopism, laudation, luxuriant, maleficence, misogamy (I did not know this was a word, and I’ll definitely try to remember it/that it is…), obsolescent, delible, overweening, parlay (this word probably does not mean what you think it means…), perspicacity, perspicuity, temblor, precipitous, quinquennial, razzmatazz, turpitude, vicissitude, vitriform.

iv. Some quotes from this excellent book review, by Razib Khan:

“relatively old-fashioned anti-religious sentiments […] are socially acceptable among American Left-liberals so long as their targets are white Christians (“punching up”) but more “problematic” and perhaps even “Islamophobic” when the invective is hurled at Muslim “people of color” (all Muslims here being tacitly racialized as nonwhite). […] Muslims, as marginalized people, are now considered part of a broader coalition on the progressive Left. […] most Left-liberals who might fall back on the term Islamophobia, don’t actually take Islam, or religion generally, seriously. This explains the rapid and strident recourse toward a racial analogy for Islamic identity, as that is a framework that modern Left-liberals and progressives have internalized and mastered. The problem with this is that Islam is not a racial or ethnic identity, it is a set of beliefs and practices. Being a Muslim is not about being who you are in a passive sense, but it is a proactive expression of a set of ideas about the world and your behavior within the world. This category error renders much of Left-liberal and progressive analysis of Islam superficial, and likely wrong.”

“To get a genuine understanding of a topic as broad and boundless as Islam one needs to both set aside emotional considerations, as Ben Affleck can not, and dig deeply into the richer and more complex empirical texture, which Sam Harris has not.”

“One of the most obnoxious memes in my opinion during the Obama era has been the popularization of the maxim that “The arc of the moral universe is long, but it bends towards justice.” It is smug and self-assured in its presentation. […] too often it becomes an excuse for lazy thinking and shallow prognostication. […] Modern Western liberals have a particular idea of what a religion is, and so naturally know that Islam is in many ways just like United Methodism, except with a hijab and iconoclasm. But a Western liberalism that does not take cultural and religious difference seriously is not serious, and yet all too often it is what we have on offer. […] On both the American Left and Right there is a tendency to not even attempt to understand Islam. Rather, stylized models are preferred which lead to conclusions which are already arrived at.”

“It’s fine to be embarrassed by reality. But you still need to face up to reality. Where Hamid, Harris, and I all start is the fact that the vast majority of the world’s Muslims do not hold views on social issues that are aligned with the Muslim friends of Hollywood actors. […] Before the Green Revolution I told people to expect there to be an Islamic revival, as 86 percent of Egyptians polled agree with the killing of apostates. This is not a comfortable fact for me, as I am technically an apostate.* But it is a fact. Progressives who exhibit a hopefulness about human nature, and confuse majoritarian democracy with liberalism and individual rights, often don’t want to confront these facts. […] Their polar opposites are convinced anti-Muslims who don’t need any survey data, because they know that Muslims have particular views a priori by virtue of them being Muslims. […] There is a glass half-full/half-empty aspect to the Turkish data. 95 percent of Turks do not believe apostates should be killed. This is not surprising, I know many Turkish atheists personally. But, 5 percent is not a reassuring fraction as someone who is personally an apostate. The ideal, and frankly only acceptable, proportion is basically 0 percent.”

“Harris would give a simple explanation for why Islam sanctions the death penalty for apostates. To be reductive and hyperbolic, his perspective seems to be that Islam is a totalitarian cult, and its views are quite explicit in the Quran and the Hadith. Harris is correct here, and the views of the majority of Muslims in Egypt (and many other Muslim nations) have support in Islamic law. The consensus historical tradition is that apostates are subject to the death penalty. […] the very idea of accepting atheists is taboo in most Arab countries”.

“Christianity which Christians hold to be fundamental and constitutive of their religion would have seemed exotic and alien even to St. Paul. Similarly, there is a much smaller body of work which makes the same case for Islam.

A précis of this line of thinking is that non-Muslim sources do not make it clear that there was in fact a coherent new religion which burst forth out of south-central Arabia in the 7th century. Rather, many aspects of Islam’s 7th century were myths which developed over time, initially during the Umayyad period, but which eventually crystallized and matured into orthodoxy under the Abbasids, over a century after the death of Muhammad. This model holds that the Arab conquests were actually Arab conquests, not Muslim ones, and that a predominantly nominally Syrian Christian group of Arab tribes eventually developed a new religion to justify their status within the empire which they built, and to maintain their roles within it. The mawali (convert) revolution under the Abbasids in the latter half of the 8th century transformed a fundamentally Arab ethnic sect into a universal religion. […] The debate about the historical Jesus only emerged when the public space was secularized enough so that such discussions would not elicit violent hostility from the populace or sanction from the authorities. [T]he fact is that the debate about the historical Muhammad is positively dangerous and thankless. That is not necessarily because there is that much more known about Muhammad than Jesus, it is because post-Christian society allows for an interrogation of Christian beliefs which Islamic society does not allow for in relation to Islam’s founding narratives.”

“When it comes to understanding religion you need to start with psychology. In particular, cognitive psychology. This feeds into the field of evolutionary anthropology in relation to the study of religion. Probably the best introduction to this field is Scott Atran’s dense In Gods We Trust: The Evolutionary Landscape of Religion. Another representative work is Theological Incorrectness: Why Religious People Believe What They Shouldn’t. This area of scholarship purports to explain why religion is ubiquitous, and, why as a phenomenon it tends to exhibit a particular distribution of characteristics.

What cognitive psychology suggests is that there is a strong disjunction between the verbal scripts that people give in terms of what they say they believe, and the internal Gestalt mental models which seem to actually be operative in terms of informing how they truly conceptualize the world. […] Muslims may aver that their god is omniscient and omnipresent, but their narrative stories in response to life circumstances seem to imply that their god may not see or know all things at all moments.

The deep problem here is understood [by] religious professionals: they’ve made their religion too complex for common people to understand without their intermediation. In fact, I would argue that theologians themselves don’t really understand what they’re talking about. To some extent this is a feature, not a bug. If the God of Abraham is transformed into an almost incomprehensible being, then religious professionals will have perpetual work as interpreters. […] even today most Muslims can not read the Quran. Most Muslims do not speak Arabic. […] The point isn’t to understand, the point is that they are the Word of God, in the abstract. […] The power of the Quran is that the Word of God is presumably potent. Comprehension is secondary to the command.”

“the majority of the book […] is focused on political and social facts in the Islamic world today. […] That is the best thing about Islamic Exceptionalism, it will put more facts in front of people who are fact-starved, and theory rich. That’s good.”

“the term ‘fundamentalist’ in the context of islam isn’t very informative.” (from the comments).

Below I have added some (very) superficially related links of my own, most of them ‘data-related’ (in general I’d say that I usually find ‘raw data’ more interesting than ‘big ideas’):

*My short review of Theological Correctness, one of the books Razib mentions.

*Of almost 163,000 people who applied for asylum in Sweden last year, fewer than 500 landed a job (news article).

*An analysis of Danish data conducted by the Rockwool Foundation found that among spouses/relatives etc. family-reunified to refugees, 22 % were employed after having lived in Denmark for five years (the family-reunified individuals, that is, not the refugees themselves). Only one in three of the family-reunified individuals had managed to find a job after having stayed here for fifteen years. The employment rate of people family-reunified to immigrants is 49 % for those who have been in the country for 5 years, and the number is below 60 % after 15 years. In Denmark, the employment rate of immigrants from non-Western countries was 47.7 % in November 2013, compared to 73.8 % for people of (…’supposedly’, see also my comments and observations here) Danish origin, according to numbers from Statistics Denmark (link). When you look at the economic performance of the people with refugee status themselves, 34 % are employed after 5 years, but that number is almost unchanged a decade later – only 37 % are employed after they’ve stayed in Denmark for 15 years.
Things of course sometimes look even worse at the local level than these averages suggest; for example, of the 244 refugees and family-reunified individuals who had arrived in the Danish Elsinore Municipality within the last three years, exactly 5 were in full-time employment.

*Rotherham child sexual exploitation scandal (“The report estimated that 1,400 children had been sexually abused in the town between 1997 and 2013, predominantly by gangs of British-Pakistani Muslim men […] Because most of the perpetrators were of Pakistani heritage, several council staff described themselves as being nervous about identifying the ethnic origins of perpetrators for fear of being thought racist […] It was reported in June 2015 that about 300 suspects had been identified.”)

*A memorial service for the terrorist and murderer Omar El-Hussein who went on a shooting rampage in Copenhagen last year (link) gathered 1500 people, and 600-700 people also participated at the funeral (Danish link).

*Pew asked muslims in various large countries whether they thought ‘Suicide Bombing of Civilian Targets to Defend Islam [can] be Justified?’ More than a third of French muslims think that it can, either ‘often/sometimes’ (16 %) or ‘rarely’ (19 %). Roughly a fourth of British muslims think so as well (15 % often/sometimes, 9 % rarely). Of course in countries like Jordan, Nigeria, and Egypt the proportion of people who do not reply ‘never’ is above 50 %. In such contexts people often like to focus on what the majorities think, but I found it interesting to note that in only 2 of the 11 countries queried (Germany – 7 %, and the US – 8 %) did fewer than 10 % of muslims consider suicide bombings ‘often’ or ‘sometimes’ justified. Those numbers are some years old. Newer numbers (from non-Western countries only, unfortunately) tell us that e.g. fewer than two out of five Egyptians (38%) and fewer than three out of five Turks (58%) would answer ‘never’ when asked this question just a couple of years ago, in 2014.

*A few non-data-related observations here towards the end. I do think Razib is right that cognitive psychology is a good starting point if you want to ‘understand religion’, but a more general point I would make is that there are many different analytical approaches to these sorts of topics which one might employ, and I think it’s important that one does not privilege any single analytical framework over the others (just to be clear, I’m not saying that Razib is doing this); different approaches may yield different insights, perhaps at different analytical levels, and combining different approaches is likely to be very useful for getting ‘the bigger picture’, or at least for not overlooking important details. ‘History’, broadly defined, may provide one part of the explanatory model, cognitive psychology another part, mathematical anthropology (e.g. stuff like this) probably also has a role to play, and so on. Survey data, economic figures, and scientific literatures on a wide variety of topics like trust, norms, migration analysis, and conflict studies (e.g. those dealing with civil wars) may all help elucidate important questions of interest, if not by adding relevant data then by providing additional methodological approaches/scaffoldings which might be fruitfully employed to make sense of the data that is available.

v. Statistical Portrait of Hispanics in the United States.

vi. The Level and Nature of Autistic Intelligence. Autistics may be smarter than people have been led to believe:

“Autistics are presumed to be characterized by cognitive impairment, and their cognitive strengths (e.g., in Block Design performance) are frequently interpreted as low-level by-products of high-level deficits, not as direct manifestations of intelligence. Recent attempts to identify the neuroanatomical and neurofunctional signature of autism have been positioned on this universal, but untested, assumption. We therefore assessed a broad sample of 38 autistic children on the preeminent test of fluid intelligence, Raven’s Progressive Matrices. Their scores were, on average, 30 percentile points, and in some cases more than 70 percentile points, higher than their scores on the Wechsler scales of intelligence. Typically developing control children showed no such discrepancy, and a similar contrast was observed when a sample of autistic adults was compared with a sample of nonautistic adults. We conclude that intelligence has been underestimated in autistics.”

I recall that back when I was diagnosed I was subjected to a battery of different cognitive tests of various kinds, and a few of those tests I recall thinking were very difficult, compared to how difficult they somehow ‘ought to be’ – it was like ‘this should be an easy task for someone who has the mental hardware to solve this type of problem, but I don’t seem to have that piece of hardware; I have no idea how to manipulate these objects in my head so that I might answer that question’. This was an at least somewhat unfamiliar feeling to me in a testing context, and I definitely did not have this experience when doing the Mensa admissions test later on, which was based on Raven’s matrices. Despite the fact that all IQ tests are supposed to measure pretty much the same thing I do not find it hard to believe that there are some details here which may complicate matters a bit in specific contexts, e.g. for people whose brains may not be structured quite the same way ‘ordinary brains’ are (to put it very bluntly). But of course this is just one study and a few personal impressions – more research is needed, etc. (Even though the effect size is huge.)

Slightly related to the above is also this link – I must admit that I find the title question quite interesting. I find it very difficult to picture characters featuring in books I’m reading in my mind, and so usually when I read books I don’t form any sort of coherent mental image of what the character looks like. It doesn’t matter to me, I don’t care. I have no idea if this is how other people read (fiction) books, or if they actually imagine what the characters look like more or less continuously while those characters are described doing the things they might be doing; to me it would be just incredibly taxing to keep even a simplified mental model of the physical attributes of a character in my mind for even a minute. I can recall specific traits like left-handedness and similar without much difficulty if I think the trait might have relevance to the plot, which has helped me while reading e.g. Agatha Christie novels before, but actively imagining what people look like in my mind I just find very difficult. I find it weird to think that some people might do something like that almost automatically, without thinking about it.

vii. Computer Science Resources. I recently shared the link with a friend, but of course she was already aware of the existence of this resource. Some people reading along here may not be, so I’ll include the link here. It has a lot of stuff.

June 8, 2016 Posted by | autism, Books, Chess, Computer science, Data, Demographics, Psychology, Random stuff, Religion | Leave a comment

Random stuff

I find it difficult to find the motivation to finish the half-finished drafts I have lying around, so this will have to do. Some random stuff below.

i.

(15,000 views… In some sense that seems really ‘unfair’ to me, but on the other hand I doubt either Beethoven or Gilels cares; they’re both long dead, after all…)

ii. New/newish words I’ve encountered in books, on vocabulary.com or elsewhere:

Agley, peripeteia, dissever, halidom, replevin, socage, organdie, pouffe, dyarchy, tauricide, temerarious, acharnement, cadger, gravamen, aspersion, marronage, adumbrate, succotash, deuteragonist, declivity, marquetry, machicolation, recusal.

iii. A lecture:

It’s been a long time since I watched it so I don’t have anything intelligent to say about it now, but I figured it might be of interest to one or two of the people who still subscribe to the blog despite the infrequent updates.

iv. A few wikipedia articles (I won’t comment much on the contents or quote extensively from the articles the way I’ve done in previous wikipedia posts – the links shall have to suffice for now):

Duverger’s law.

Far side of the moon.

Preference falsification.

Russian political jokes. Some of those made me laugh (e.g. this one: “A judge walks out of his chambers laughing his head off. A colleague approaches him and asks why he is laughing. ‘I just heard the funniest joke in the world!’ ‘Well, go ahead, tell me!’ says the other judge. ‘I can’t – I just gave someone ten years for it!’”).

Political mutilation in Byzantine culture.

v. World War 2, if you think of it as a movie, has a highly unrealistic and implausible plot, according to this amusing post by Scott Alexander. Having recently read a rather long book about these topics, one aspect I’d have added had I written the piece myself is an additional factor making the setting seem even more implausible: how so many presumably quite smart people were – at least in retrospect – so unbelievably stupid when it came to Hitler’s ideas and intentions before the war. Going back to Churchill: I’d also add that if you were to make a movie about his life during the war, which you could probably do relatively easily by just basing it upon his own copious and widely shared notes, it could probably be made into quite a decent movie. His own comments, remarks, and observations certainly made for a great book.

May 15, 2016 Posted by | Astronomy, Computer science, History, language, Lectures, Mathematics, Music, Random stuff, Russia, Wikipedia | Leave a comment

A few lectures

The sound quality of this lecture is not completely optimal – there’s a recurring echo popping up now and then which I found slightly annoying – but this should not keep you from watching it. It’s quite a good lecture, and very accessible – I don’t really think you even need to know anything about genetics to follow most of what he’s talking about; as far as I can tell it’s a lecture intended for people who don’t really know much about population genetics. He introduces key concepts as they are needed, and he does not go much into the technical details which might cause people trouble (this of course also makes the lecture somewhat superficial, but you can’t have everything). If you’re the sort of person who wants details not included in the lecture, you’re probably already reading e.g. Razib Khan (who incidentally recently blogged/criticized a paper not too dissimilar from the one discussed in the lecture, dealing with South Asia)…

I must admit that I actually didn’t like this lecture very much, but I figured I might as well include it in this post anyway.

I found some of the questions included, and some aspects of the coverage, a bit ‘too basic’ for my taste, but other people interested in chess reading along here may like Anna’s approach better; like Krause’s lecture above, I think it’s an accessible lecture, despite the fact that it actually covers many lines in quite a bit of detail. It’s a long lecture, but I don’t think you necessarily need to watch all of it in one go (…or at all?) – the analysis of the second game, the Kortschnoj–Gheorghiu game, starts around 45 minutes in, so that might for example be a good place to take a break, if a break is required.

February 1, 2016 Posted by | Anthropology, Archaeology, Chess, Computer science, Evolutionary biology, Genetics, History, Lectures | Leave a comment

A few lectures

Below are three new lectures from the Institute for Advanced Study. As far as I’ve gathered they’re all from an IAS symposium called ‘Lens of Computation on the Sciences’ – all three lecturers are computer scientists, but you don’t have to be a computer scientist to watch these lectures.

Should computer scientists and economists band together more and try to use the insights from one field to help solve problems in the other field? Roughgarden thinks so, and provides examples of how this might be done/has been done. Applications discussed in the lecture include traffic management and auction design. I’m not sure how much of this lecture is easy to follow for people who don’t know anything about either topic (i.e., computer science and economics), but I found it not too difficult to follow – it probably helped that I’ve actually done work on a few of the things he touches upon in the lecture, such as basic auction theory, the fixed point theorems and related proofs, basic queueing theory and basic discrete maths/graph theory. Either way there are certainly much more technical lectures than this one available at the IAS channel.

I don’t have Facebook and I’m not planning on ever getting a FB account, so I’m not really sure I care about the things this guy is trying to do, but the lecturer does touch upon some interesting topics in network theory. Not a great lecture in my opinion and occasionally I think the lecturer ‘drifts’ a bit, talking without saying very much, but it’s also not a terrible lecture. A few times I was really annoyed that you can’t see where he’s pointing that damn laser pointer, but this issue should not stop you from watching the video, especially not if you have an interest in analytical aspects of how to approach and make sense of ‘Big Data’.

I’ve noticed that Scott Alexander has said some nice things about Scott Aaronson a few times, but until now I’ve never actually read any of the latter guy’s stuff or watched any lectures by him. I agree with Scott (Alexander) that Scott (Aaronson) is definitely a smart guy. This is an interesting lecture; I won’t pretend I understood all of it, but it has some thought-provoking ideas and important points in the context of quantum computing and it’s actually a quite entertaining lecture; I was close to laughing a couple of times.

January 8, 2016 Posted by | Computer science, Economics, Game theory, Lectures, Mathematics, Physics | Leave a comment

Random stuff/Open Thread

i. A lecture on mathematical proofs:

ii. “In the fall of 1944, only seven percent of all bombs dropped by the Eighth Air Force hit within 1,000 feet of their aim point.”

From wikipedia’s article on Strategic bombing during WW2. The article has a lot of stuff. The ‘RAF estimates of destruction of “built up areas” of major German cities’ numbers in the article made my head spin – they didn’t bomb the Germans back to the stone age, but they sure tried. Here’s another observation from the article:

“After the war, the U.S. Strategic Bombing Survey reviewed the available casualty records in Germany, and concluded that official German statistics of casualties from air attack had been too low. The survey estimated that at a minimum 305,000 were killed in German cities due to bombing and estimated a minimum of 780,000 wounded. Roughly 7,500,000 German civilians were also rendered homeless.” (The German population at the time was roughly 70 million).

iii. Also war-related: Eddie Slovik:

Edward Donald “Eddie” Slovik (February 18, 1920 – January 31, 1945) was a United States Army soldier during World War II and the only American soldier to be court-martialled and executed for desertion since the American Civil War.[1][2]

Although over 21,000 American soldiers were given varying sentences for desertion during World War II, including 49 death sentences, Slovik’s was the only death sentence that was actually carried out.[1][3][4]

During World War II, 1.7 million courts-martial were held, representing one third of all criminal cases tried in the United States during the same period. Most of the cases were minor, as were the sentences.[2] Nevertheless, a clemency board, appointed by the Secretary of War in the summer of 1945, reviewed all general courts-martial where the accused was still in confinement.[2][5] That Board remitted or reduced the sentence in 85 percent of the 27,000 serious cases reviewed.[2] The death penalty was rarely imposed, and those cases typically were for rapes or murders. […] In France during World War I from 1917 to 1918, the United States Army executed 35 of its own soldiers, but all were convicted of rape and/or unprovoked murder of civilians and not for military offenses.[13] During World War II in all theaters of the war, the United States military executed 102 of its own soldiers for rape and/or unprovoked murder of civilians, but only Slovik was executed for the military offense of desertion.[2][14] […] of the 2,864 army personnel tried for desertion for the period January 1942 through June 1948, 49 were convicted and sentenced to death, and 48 of those sentences were voided by higher authority.”

What motivated me to read the article was mostly curiosity about how many people were actually executed for deserting during the war, a question I’d never encountered any answers to previously. The US number turned out to be, well, let’s just say it’s lower than I’d expected. American soldiers who chose to desert during the war seem to have had much, much better chances of surviving the war than soldiers who did not. Slovik was not a lucky man. On a related note, given numbers like these I’m really surprised desertion rates were not much higher than they were; presumably community norms (‘desertion = disgrace’, which would probably rub off on other family members…) played a key role here.

iv. Chess and infinity. I haven’t posted this link before even though the thread is a few months old, and I figured that given that I just had a conversation on related matters in the comment section of SCC (here’s a link) I might as well repost some of this stuff here. Some key points from the thread (I had to make slight formatting changes to the quotes because wordpress had trouble displaying some of the numbers, but the content is unchanged):

u/TheBB:
“Shannon has estimated the number of possible legal positions to be about 10^43. The number of legal games is quite a bit higher, estimated by Littlewood and Hardy to be around 10^(10^5) (commonly cited as 10^(10^50), perhaps due to a misprint). This number is so large that it can’t really be compared with anything that is not combinatorial in nature. It is far larger than the number of subatomic particles in the observable universe, let alone stars in the Milky Way galaxy.

As for your bonus question, a typical chess game today lasts about 40 to 60 moves (let’s say 50). Let us say that there are 4 reasonable candidate moves in any given position. I suspect this is probably an underestimate if anything, but let’s roll with it. That gives us about 4^(2×50) ≈ 10^60 games that might reasonably be played by good human players. If there are 6 candidate moves, we get around 10^77, which is in the neighbourhood of the number of particles in the observable universe.”

u/Wondersnite:
“To put 10^(10^5) into perspective:

There are 10^80 protons in the Universe. Now imagine inside each proton, we had a whole entire Universe. Now imagine again that inside each proton inside each Universe inside each proton, you had another Universe. If you count up all the protons, you get (10^80)^3 = 10^240, which is nowhere near the number we’re looking for.

You have to have Universes inside protons all the way down to 1250 steps to get the number of legal chess games that are estimated to exist. […]

Imagine that every single subatomic particle in the entire observable universe was a supercomputer that analysed a possible game in a single Planck unit of time (10^-43 seconds, the time it takes light in a vacuum to travel 10^-20 times the width of a proton), and that every single subatomic particle computer was running from the beginning of time up until the heat death of the Universe, 10^1000 years ≈ 10^11 × 10^1000 seconds from now.

Even in these ridiculously favorable conditions, we’d only be able to calculate

10^80 × 10^43 × 10^11 × 10^1000 = 10^1134

possible games. Again, this doesn’t even come close to 10^(10^5) = 10^100000.

Basically, if we ever solve the game of chess, it definitely won’t be through brute force.”
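
The exponent arithmetic in those quotes is easy to verify; here’s a quick Python sanity check of the numbers (the estimates are of course theirs, only the snippet is mine):

```python
from math import log10

# ~4 candidate moves per position, ~100 plies (50 moves by each side):
print(100 * log10(4))        # ~60.2 -> about 10^60 'reasonable' games
print(100 * log10(6))        # ~77.8 -> roughly 10^77-10^78 with 6 candidates

# 'Universes inside protons': each nesting level adds 80 to the exponent,
# so reaching 10^(10^5) = 10^100000 legal games takes:
print(100_000 / 80)          # 1250.0 nesting steps, as quoted

# particles x Planck times x seconds until heat death (exponents add up):
print(80 + 43 + 11 + 1000)   # 1134 -> 10^1134 games, nowhere near 10^100000
```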

v. An interesting resource which a friend of mine recently shared with me and which I thought I should share here as well: Nature Reviews – Disease Primers.

vi. Here are some words I’ve recently encountered on vocabulary.com: augury, spangle, imprimatur, apperception, contrition, ensconce, impuissance, acquisitive, emendation, tintinnabulation, abalone, dissemble, pellucid, traduce, objurgation, lummox, exegesis, probity, recondite, impugn, viscid, truculence, appurtenance, declivity, adumbrate, euphony, educe, titivate, cerulean, ardour, vulpine.

May 16, 2015 Posted by | Chess, Computer science, History, language, Lectures, Mathematics | Leave a comment

Belief-Based Stability in Coalition Formation with Uncertainty…

“In this book we present several novel concepts in cooperative game theory, but from a computer scientist’s point of view. Especially, we will look at a type of games called non-transferable utility games. […] In this book, we extend the classic stability concept of the non-transferable utility core by proposing new belief-based stability criteria under uncertainty, and illustrate how the new concept can be used to analyse the stability of a new type of belief-based coalition formation game. Mechanisms for reaching solutions of the new stable criteria are proposed and some real life application examples are studied. […] In Chapter 1, we first provide an introduction of topics in game theory that are relevant to the concepts discussed in this book. In Chapter 2, we review some relevant works from the literature, especially in cooperative game theory and multi-agent coalition formation problems. In Chapter 3, we discuss the effect of uncertainty in the agent’s beliefs on the stability of the games. A rule-based approach is adopted and the concepts of strong core and weak core are introduced. We also discuss the effect of precision of the beliefs on the stability of the coalitions. In Chapter 4, we introduce private beliefs in non-transferable utility (NTU) games, so that the preferences of the agents are no longer common knowledge. The impact of belief accuracy on stability is also examined. In Chapter 5, we study an application of the proposed belief-based stability concept, namely the buyer coalition problem, and we see how the proposed concept can be used in the evaluation of this multi-agent coalition formation problem. In Chapter 6, we combine the works of earlier chapters and produce a complete picture of the introduced concepts: non-transferable utility games with private beliefs and uncertainty. We conclude this book in Chapter 7.”

The above quote is from the preface of the book, which I finished yesterday. It deals with some issues I was slightly annoyed about not being covered in a previous micro course; my main problem back then was that the question of belief accuracy, and the role played by this variable, did not seem properly addressed in the models we looked at (‘people can have mistaken beliefs, and it seems obvious that the ways in which they’re wrong can affect which solutions are eventually reached’). The book makes the point that if you look at coalition formation in a context where it is not reasonable to assume that information is shared among coalition partners (because it is in the interest of the participants to keep their information/preferences/willingness to pay private), then the beliefs of the potential coalition partners may play a major role in determining which coalitions are feasible and which are ruled out. A key point is that in the model context explored by the authors, inaccurate beliefs will expand the number of potential coalitions available, although coalition options ruled out by accurate beliefs are less stable than ones which are not. They do not discuss the fact that this feature is a result of implicit assumptions made along the way which may not be true, and that inaccurate beliefs may in some contexts conceivably lead to lower solution support in general – e.g. through disagreement, or, to stay closer to the concepts of their model framework, through higher general instability of the solutions which can feasibly be reached, making agents less likely to explore coalition participation in the first place because of the lower payoffs associated with the coalitions likely to be reached. Dynamics such as these are not included in the coverage. I decided early on not to blog the contents of this book in major detail because it’s not the kind of book where that makes sense (in my opinion), but if you’re curious about how they proceed: they talk quite a bit about the (classical) core, discuss why it is not an appropriate solution concept to apply in the contexts they explore, and then develop new and better solution criteria – with the aid of some new variables and definitions along the way – ending up with their so-called ‘belief-based cores’, which are perhaps best thought of as extensions of the classical core concept. I should perhaps point out, as this may not be completely clear, that the beliefs they talk about concern both the ‘state of nature’ (which in part of the coverage is assumed to be essentially unobservable) and the preferences of the agents involved.
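To illustrate, for readers who haven’t seen it before, what kind of stability test the classical core encodes in a non-transferable utility setting, here’s a small Python sketch; the agents, feasible sets, and preference rankings are all invented toy data of mine, not an example from the book:

```python
# Classical NTU-core idea: an outcome is core-stable if no coalition has a
# feasible alternative that ALL of its members strictly prefer.
# All data below are hypothetical toy values.
feasible = {                       # outcomes each coalition can achieve alone
    ("A",): {"a"}, ("B",): {"b"}, ("C",): {"c"},
    ("A", "B"): {"ab"}, ("A", "C"): {"ac"}, ("B", "C"): {"bc"},
    ("A", "B", "C"): {"abc1", "abc2"},
}
rank = {  # rank[i][x]: a lower number means agent i likes outcome x better
    "A": {"abc1": 0, "ab": 1, "ac": 2, "abc2": 3, "a": 4},
    "B": {"abc1": 0, "ab": 1, "bc": 2, "abc2": 3, "b": 4},
    "C": {"abc2": 0, "bc": 1, "ac": 2, "abc1": 3, "c": 4},
}

def blocking_coalition(outcome):
    """Return a (coalition, alternative) pair blocking `outcome`, or None."""
    for coalition, alternatives in feasible.items():
        for alt in alternatives:
            if all(rank[i][alt] < rank[i][outcome] for i in coalition):
                return coalition, alt
    return None

for x in sorted(feasible[("A", "B", "C")]):
    print(x, "->", blocking_coalition(x) or "in the core")
# prints: abc1 -> in the core
#         abc2 -> (('A', 'B'), 'ab')
```

As I understand the setup, the belief-based extensions in the book essentially modify the `all(...)` test: instead of checking the other agents’ true preferences, each agent checks its (possibly inaccurate, possibly private) beliefs about those preferences – which is how belief accuracy ends up determining which coalitions look feasible and stable.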

If you want a sort of bigger-picture idea of what this book is about, I should point out that game theory in general has two major sub-fields, dealing with cooperative and non-cooperative games respectively. Within the sub-field of cooperative games, a distinction is made between games and settings where utilities are transferable and games/settings where they are not. This book belongs in the latter category; it deals with cooperative games in which utilities are non-transferable. The authors in the beginning make a big deal out of this distinction and claim that the non-transferability assumption is the more plausible one. They do have a point, but I actually think the assumption is borderline questionable in some of the specific examples included in the book. To give an example, the non-transferability assumption seems in one context to imply that all potential coalition partners have the same amount of bargaining power. That is plausible in some contexts, but wildly implausible in others (and I’m not sure the authors would agree with me about which contexts belong to which category).

The professor teaching the most recent course in micro I took had a background in computer science rather than economics – he was also Asian, but this perhaps goes without saying. This book is supposedly a computer science book, and the authors argue in the introduction that “instead of looking at human beings, we study the problem from an intelligent software agent’s perspective.” However, I don’t think a single one of the examples included in the book is one you could not also have found in a classic micro text, and in many parts of the coverage it’s really hard to tell that the authors aren’t economists with a background in micro – there seems to be quite a bit of field overlap here (an overlap which incidentally extends to areas of economics besides micro, is my impression; one econometrics TA I had, teaching the programming part of the course, was also a CS major). In the book they talk a bit about coalition formation mechanisms and approaches, such as propose-and-evaluate mechanisms and auction approaches, and they also touch briefly upon stuff like mechanism design. They state in the description that: “The book is intended for graduate students, engineers, and researchers in the field of artificial intelligence and computer science.” I think it’s really weird that they don’t include (micro-)economists as well, because this stuff is obviously quite close to/potentially relevant to the kind of work some of those people are doing.

There are a lot of definitions, theorems, and proofs in this book, and as usual when doing work on game theory you need to think very carefully about the material to be able to follow it, but I actually found it reasonably accessible – the book is not terribly difficult to read. I would, though, probably advise you against reading it if you have not at least read an intro text on game theory. Although, as already mentioned, the book deals with an analytical context in which utilities are non-transferable, it should be pointed out that this assumption is sort of implicit in the coverage, in the sense that the authors don’t really deal with utility functions at all; the book only deals with preference relations, so it probably helps to be familiar with this type of analysis (e.g. from having solved some problems dealing with the kind of stuff included in chapter 1 of Mas-Colell).

Part of the reason why I gave the book only two stars is that the authors are Chinese and their English is terrible. Another reason is that as is usually the case in game theory, these guys spend a lot of time and effort being very careful to define their terms and make correct inferences from the assumptions they make – but they don’t really end up saying very much.

February 28, 2015 Posted by | Books, Computer science, Economics | Leave a comment

Wikipedia articles of interest

i. Trade and use of saffron.

Saffron has been a key seasoning, fragrance, dye, and medicine for over three millennia.[1] One of the world’s most expensive spices by weight,[2] saffron consists of stigmas plucked from the vegetatively propagated and sterile Crocus sativus, known popularly as the saffron crocus. The resulting dried “threads”[N 1] are distinguished by their bitter taste, hay-like fragrance, and slight metallic notes. The saffron crocus is unknown in the wild; its most likely precursor, Crocus cartwrightianus, originated in Crete or Central Asia;[3] The saffron crocus is native to Southwest Asia and was first cultivated in what is now Greece.[4][5][6]

From antiquity to modern times the history of saffron is full of applications in food, drink, and traditional herbal medicine: from Africa and Asia to Europe and the Americas the brilliant red threads were—and are—prized in baking, curries, and liquor. It coloured textiles and other items and often helped confer the social standing of political elites and religious adepts. Ancient peoples believed saffron could be used to treat stomach upsets, bubonic plague, and smallpox.

Saffron crocus cultivation has long centred on a broad belt of Eurasia bounded by the Mediterranean Sea in the southwest to India and China in the northeast. The major producers of antiquity—Iran, Spain, India, and Greece—continue to dominate the world trade. […] Iran has accounted for around 90–93 percent of recent annual world production and thereby dominates the export market on a by-quantity basis. […]

The high cost of saffron is due to the difficulty of manually extracting large numbers of minute stigmas, which are the only part of the crocus with the desired aroma and flavour. An exorbitant number of flowers need to be processed in order to yield marketable amounts of saffron. Obtaining 1 lb (0.45 kg) of dry saffron requires the harvesting of some 50,000 flowers, the equivalent of an association football pitch’s area of cultivation, or roughly 7,140 m2 (0.714 ha).[14] By another estimate some 75,000 flowers are needed to produce one pound of dry saffron. […] Another complication arises in the flowers’ simultaneous and transient blooming. […] Bulk quantities of lower-grade saffron can reach upwards of US$500 per pound; retail costs for small amounts may exceed ten times that rate. In Western countries the average retail price is approximately US$1,000 per pound.[5] Prices vary widely elsewhere, but on average tend to be lower. The high price is somewhat offset by the small quantities needed in kitchens: a few grams at most in medicinal use and a few strands, at most, in culinary applications; there are between 70,000 and 200,000 strands in a pound.”

ii. Scramble for Africa.

“The “Scramble for Africa” (also the Partition of Africa and the Conquest of Africa) was the invasion and occupation, colonization and annexation of African territory by European powers during the period of New Imperialism, between 1881 and 1914. In 1870, 10 percent of Africa was under European control; by 1914 it was 90 percent of the continent, with only Abyssinia (Ethiopia) and Liberia still independent.”

Here’s a really neat illustration from the article:

[Map: the Scramble for Africa, 1880–1913]

“Germany became the third largest colonial power in Africa. Nearly all of its overall empire of 2.6 million square kilometres and 14 million colonial subjects in 1914 was found in its African possessions of Southwest Africa, Togoland, the Cameroons, and Tanganyika. Following the 1904 Entente cordiale between France and the British Empire, Germany tried to isolate France in 1905 with the First Moroccan Crisis. This led to the 1905 Algeciras Conference, in which France’s influence on Morocco was compensated by the exchange of other territories, and then to the Agadir Crisis in 1911. Along with the 1898 Fashoda Incident between France and Britain, this succession of international crises reveals the bitterness of the struggle between the various imperialist nations, which ultimately led to World War I. […]

David Livingstone‘s explorations, carried on by Henry Morton Stanley, excited imaginations. But at first, Stanley’s grandiose ideas for colonisation found little support owing to the problems and scale of action required, except from Léopold II of Belgium, who in 1876 had organised the International African Association (the Congo Society). From 1869 to 1874, Stanley was secretly sent by Léopold II to the Congo region, where he made treaties with several African chiefs along the Congo River and by 1882 had sufficient territory to form the basis of the Congo Free State. Léopold II personally owned the colony from 1885 and used it as a source of ivory and rubber.

While Stanley was exploring Congo on behalf of Léopold II of Belgium, the Franco-Italian marine officer Pierre de Brazza travelled into the western Congo basin and raised the French flag over the newly founded Brazzaville in 1881, thus occupying today’s Republic of the Congo. Portugal, which also claimed the area due to old treaties with the native Kongo Empire, made a treaty with Britain on 26 February 1884 to block off the Congo Society’s access to the Atlantic.

By 1890 the Congo Free State had consolidated its control of its territory between Leopoldville and Stanleyville, and was looking to push south down the Lualaba River from Stanleyville. At the same time, the British South Africa Company of Cecil Rhodes was expanding north from the Limpopo River, sending the Pioneer Column (guided by Frederick Selous) through Matabeleland, and starting a colony in Mashonaland.

To the West, in the land where their expansions would meet, was Katanga, site of the Yeke Kingdom of Msiri. Msiri was the most militarily powerful ruler in the area, and traded large quantities of copper, ivory and slaves — and rumours of gold reached European ears. The scramble for Katanga was a prime example of the period. Rhodes and the BSAC sent two expeditions to Msiri in 1890 led by Alfred Sharpe, who was rebuffed, and Joseph Thomson, who failed to reach Katanga. Leopold sent four CFS expeditions. First, the Le Marinel Expedition could only extract a vaguely worded letter. The Delcommune Expedition was rebuffed. The well-armed Stairs Expedition was given orders to take Katanga with or without Msiri’s consent. Msiri refused, was shot, and the expedition cut off his head and stuck it on a pole as a “barbaric lesson” to the people. The Bia Expedition finished the job of establishing an administration of sorts and a “police presence” in Katanga.

Thus, the half million square kilometres of Katanga came into Leopold’s possession and brought his African realm up to 2,300,000 square kilometres (890,000 sq mi), about 75 times larger than Belgium. The Congo Free State imposed such a terror regime on the colonised people, including mass killings and forced labour, that Belgium, under pressure from the Congo Reform Association, ended Leopold II’s rule and annexed it in 1908 as a colony of Belgium, known as the Belgian Congo. […]

“Britain’s administration of Egypt and the Cape Colony contributed to a preoccupation over securing the source of the Nile River. Egypt was overrun by British forces in 1882 (although not formally declared a protectorate until 1914, and never an actual colony); Sudan, Nigeria, Kenya and Uganda were subjugated in the 1890s and early 20th century; and in the south, the Cape Colony (first acquired in 1795) provided a base for the subjugation of neighbouring African states and the Dutch Afrikaner settlers who had left the Cape to avoid the British and then founded their own republics. In 1877, Theophilus Shepstone annexed the South African Republic (or Transvaal – independent from 1857 to 1877) for the British Empire. In 1879, after the Anglo-Zulu War, Britain consolidated its control of most of the territories of South Africa. The Boers protested, and in December 1880 they revolted, leading to the First Boer War (1880–81). British Prime Minister William Gladstone signed a peace treaty on 23 March 1881, giving self-government to the Boers in the Transvaal. […] The Second Boer War, fought between 1899 and 1902, was about control of the gold and diamond industries; the independent Boer republics of the Orange Free State and the South African Republic (or Transvaal) were this time defeated and absorbed into the British Empire.”

There are a lot of unsourced claims in the article, and some parts of it actually aren’t very good, but this is a topic about which I did not know much (I had no idea most of colonial Africa was acquired by the European powers as late as it actually was). This is another good map from the article to have a look at if you just want the big picture.

iii. Cursed soldiers.

“The cursed soldiers (that is, “accursed soldiers” or “damned soldiers”; Polish: Żołnierze wyklęci) is a name applied to a variety of Polish resistance movements formed in the later stages of World War II and afterwards. Created by some members of the Polish Secret State, these clandestine organizations continued their armed struggle against the Stalinist government of Poland well into the 1950s. The guerrilla warfare included an array of military attacks launched against the new communist prisons as well as MBP state security offices, detention facilities for political prisoners, and concentration camps set up across the country. Most of the Polish anti-communist groups ceased to exist in the late 1940s or 1950s, hunted down by MBP security services and NKVD assassination squads.[1] However, the last known ‘cursed soldier’, Józef Franczak, was killed in an ambush as late as 1963, almost 20 years after the Soviet take-over of Poland.[2][3] […] Similar eastern European anti-communists fought on in other countries. […]

Armia Krajowa (or simply AK) – the main Polish resistance movement in World War II – had officially disbanded on 19 January 1945 to prevent a slide into armed conflict with the Red Army, including an increasing threat of civil war over Poland’s sovereignty. However, many units decided to continue on with their struggle under new circumstances, seeing the Soviet forces as new occupiers. Meanwhile, Soviet partisans in Poland had already been ordered by Moscow on June 22, 1943 to engage Polish Leśni partisans in combat.[6] They commonly fought Poles more often than they did the Germans.[4] The main forces of the Red Army (Northern Group of Forces) and the NKVD had begun conducting operations against AK partisans already during and directly after the Polish Operation Tempest, designed by the Poles as a preventive action to assure Polish rather than Soviet control of the cities after the German withdrawal.[5] Soviet premier Joseph Stalin aimed to ensure that an independent Poland would never reemerge in the postwar period.[7] […]

The first Polish communist government, the Polish Committee of National Liberation, was formed in July 1944, but declined jurisdiction over AK soldiers. Consequently, for more than a year, it was Soviet agencies like the NKVD that dealt with the AK. By the end of the war, approximately 60,000 soldiers of the AK had been arrested, and 50,000 of them were deported to the Soviet Union’s gulags and prisons. Most of those soldiers had been captured by the Soviets during or in the aftermath of Operation Tempest, when many AK units tried to cooperate with the Soviets in a nationwide uprising against the Germans. Other veterans were arrested when they decided to approach the government after being promised amnesty. In 1947, an amnesty was passed for most of the partisans; the Communist authorities expected around 12,000 people to give up their arms, but the actual number of people to come out of the forests eventually reached 53,000. Many of them were arrested despite promises of freedom; after repeated broken promises during the first few years of communist control, AK soldiers stopped trusting the government.[5] […]

The persecution of the AK members was only a part of the reign of Stalinist terror in postwar Poland. In the period of 1944–56, approximately 300,000 Polish people had been arrested,[21] or up to two million, by different accounts.[5] There were 6,000 death sentences issued, the majority of them carried out.[21] Possibly, over 20,000 people died in communist prisons including those executed “in the majesty of the law” such as Witold Pilecki, a hero of Auschwitz.[5] A further six million Polish citizens (i.e., one out of every three adult Poles) were classified as suspected members of a ‘reactionary or criminal element’ and subjected to investigation by state agencies.”

iv. Affective neuroscience.

Affective neuroscience is the study of the neural mechanisms of emotion. This interdisciplinary field combines neuroscience with the psychological study of personality, emotion, and mood.[1]

This article is actually related to the Delusion and self-deception book, which covered some of the stuff included in this article, but I decided I might as well include the link in this post. I think some parts of the article are written in a somewhat different manner than most wiki articles – there are specific paragraphs briefly covering the results of specific meta-analyses conducted in this field. I can’t really tell from this article if I actually like this way of writing a wiki article or not.

v. Hamming distance. Not a long article, but this is a useful concept to be familiar with:

“In information theory, the Hamming distance between two strings of equal length is the number of positions at which the corresponding symbols are different. Put another way, it measures the minimum number of substitutions required to change one string into the other, or the minimum number of errors that could have transformed one string into the other. […]

The Hamming distance is named after Richard Hamming, who introduced it in his fundamental paper on Hamming codes Error detecting and error correcting codes in 1950.[1] It is used in telecommunication to count the number of flipped bits in a fixed-length binary word as an estimate of error, and therefore is sometimes called the signal distance. Hamming weight analysis of bits is used in several disciplines including information theory, coding theory, and cryptography. However, for comparing strings of different lengths, or strings where not just substitutions but also insertions or deletions have to be expected, a more sophisticated metric like the Levenshtein distance is more appropriate.”
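The definition translates directly into code. A minimal Python sketch – the string pairs are just illustrative examples, not anything from the quote above:

def hamming_distance(s1, s2):
    # Hamming distance is defined only for strings of equal length.
    if len(s1) != len(s2):
        raise ValueError("Hamming distance requires equal-length strings")
    # Count the positions at which the corresponding symbols differ.
    return sum(c1 != c2 for c1, c2 in zip(s1, s2))

print(hamming_distance("karolin", "kathrin"))  # 3
print(hamming_distance("1011101", "1001001"))  # 2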

vi. Menstrual synchrony. I came across that one recently in a book, and when I did it was obvious that the author had not read this article and lacked some of the knowledge included in it (the coverage assumed the phenomenon was real, and developed theory on the assumption that it was – which would not make sense if it were not). I figured if that person didn’t know this stuff, a lot of other people – including people reading along here – probably also do not, so I should cover this topic somewhere. This is an obvious place to do so. Okay, on to the article coverage:

Menstrual synchrony, also called the McClintock effect,[2] is the alleged process whereby women who begin living together in close proximity experience their menstrual cycle onsets (i.e., the onset of menstruation or menses) becoming closer together in time than previously. “For example, the distribution of onsets of seven female lifeguards was scattered at the beginning of the summer, but after 3 months spent together, the onset of all seven cycles fell within a 4-day period.”[3]

Martha McClintock’s 1971 paper, published in Nature, says that menstrual cycle synchronization happens when the menstrual cycle onsets of two or more women become closer together in time than they were several months earlier.[3] Several mechanisms have been hypothesized to cause synchronization.[4]

After the initial studies, several papers were published reporting methodological flaws in studies reporting menstrual synchrony including McClintock’s study. In addition, other studies were published that failed to find synchrony. The proposed mechanisms have also received scientific criticism. A 2013 review of menstrual synchrony concluded that menstrual synchrony is doubtful.[4] […] in a recent systematic review of menstrual synchrony, Harris and Vitzthum concluded that “In light of the lack of empirical evidence for MS [menstrual synchrony] sensu stricto, it seems there should be more widespread doubt than acceptance of this hypothesis.” […]

The experience of synchrony may be the result of the mathematical fact that menstrual cycles of different frequencies repeatedly converge and diverge over time and not due to a process of synchronization.[12] It may also be due to the high probability of menstruation overlap that occurs by chance.[6]
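That ‘mathematical fact’ is easy to see with a toy calculation. A minimal Python sketch, with illustrative cycle lengths of 28 and 30 days (the numbers are mine, not from the article):

# Two cycles of different length drift apart and back together purely
# arithmetically - no synchronization mechanism required.
cycle_a, cycle_b = 28, 30
onsets_a = [k * cycle_a for k in range(16)]   # woman A's cycle onsets (days)
onsets_b = [k * cycle_b for k in range(15)]   # woman B's cycle onsets (days)

for b in onsets_b:
    gap = min(abs(b - a) for a in onsets_a)   # days to A's nearest onset
    print(f"B's onset on day {b:3d}: {gap:2d} days from A's nearest onset")

# The gap climbs to about half a cycle (~14 days) and then shrinks back
# toward zero, so the two women periodically look "synchronized" by
# chance alone.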

 

December 4, 2014 Posted by | Biology, Botany, Computer science, Geography, History, Medicine, Neurology, Psychology, Wikipedia | Leave a comment

A few lectures

A few lectures from Gresham College:

An interesting lecture on symmetry patterns and symmetry breaking. A lot of the discussion of the relevant principles takes animal skin patterns and -movement patterns as the starting point for the analysis, leading to interesting quotes/observations like these: “Theorem: A spotted animal can have a striped tail, but a striped animal cannot have a spotted tail”, and “…but it can’t result in a horse, because a horse is not spherically symmetric”.

He also talks about e.g. snowflakes and sand dunes and this does not feel like a theoretical lecture at all – he’s sort of employing an applied maths approach to this topic which I like. Despite the fact that it’s basically a mathematics lecture it’s quite easy to follow and I enjoyed watching it.

He takes a long time to get started and he doesn’t actually ever say much about the non-Euclidean stuff (he never even explicitly distinguishes hyperbolic geometry from elliptic geometry using those terms). He’s also not completely precise in his language during the entire lecture; at one point he emphasizes the fact that three specific choices used in a proof were ‘mutually exclusive’ as though that were the key point, even though what’s actually critical is that they were also collectively exhaustive – a point he fails to mention (and I’d assume it would be easy for a viewer not reasonably well-versed in mathematics to mix up these distinctions if they were not already familiar with the concepts). But maybe you’ll find it interesting anyway. It wasn’t a particularly bad lecture, I’d just expected a little more. I know where to look if one wants a more complete picture of the things briefly touched upon in this lecture and I’ve looked at that stuff before, but I’m certainly not going to read Penrose again any time soon – that stuff’s way too much work considering the benefits of knowing it in detail (if I’m even theoretically able to obtain knowledge of the details – some of that stuff is really hard).

December 7, 2013 Posted by | Biology, Computer science, History, Lectures, Mathematics | 2 Comments

Stuff

i. Econometric methods for causal evaluation of education policies and practices: a non-technical guide. This one is ‘work-related’; in one of my courses I’m writing a paper and this working paper is one (of many) of the sources I’m planning on using. Most of the papers I work with are unfortunately not freely available online, which is part of why I haven’t linked to them here on the blog.

I should note that there are no equations in this paper, so you should focus on the words ‘a non-technical guide’ rather than the words ‘econometric methods’ in the title – I think this is a very readable paper for the non-expert as well. I should of course also note that I have worked with most of these methods in a lot more detail, and that without the math it’s very hard to understand the details and really know what’s going on e.g. when applying such methods – or related methods such as IV methods on panel data, a topic which was covered in another class just a few weeks ago but which is not covered in this paper.

This is a place to start if you want to know something about applied econometric methods, particularly if you want to know how they’re used in the field of educational economics, and especially if you don’t have a strong background in stats or math. It should be noted that some of the methods covered see widespread use in other areas of economics as well; IV is widely used, and the difference-in-differences estimator has seen a lot of applications in health economics.
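For readers who have never seen it, the difference-in-differences idea in its simplest 2×2 form boils down to a single subtraction of subtractions. A minimal Python sketch with made-up numbers (my illustration, not something from the paper):

# Difference-in-differences in the simplest 2x2 case: compare the
# before/after change in a treated group with the before/after change
# in an untreated control group over the same period.
treat_pre, treat_post = 10.0, 14.0        # treated group outcome means
control_pre, control_post = 9.0, 10.5     # control group outcome means

did = (treat_post - treat_pre) - (control_post - control_pre)
print(did)   # 2.5 - the estimated treatment effect, assuming the two
             # groups would have followed parallel trends absent treatment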

ii. Regulating the Way to Obesity: Unintended Consequences of Limiting Sugary Drink Sizes. The law of unintended consequences strikes again.

You could argue with some of the assumptions made here (e.g. that prices per ounce remain constant) but I’m not sure the findings are that sensitive to that assumption, and without an explicit model of the pricing mechanism at work it’s mostly guesswork anyway.

iii. A discussion about the neurobiology of memory. Razib Khan posted a short part of the video recently, so I decided to watch it today. A few relevant wikipedia links: Memory, Dead reckoning, Hebbian theory, Caenorhabditis elegans. I’m skeptical, but I agree with one commenter who put it this way: “I know darn well I’m too ignorant to decide whether Randy is possibly right, or almost certainly wrong — yet I found this interesting all the way through.” I also agree with another commenter who mentioned that it’d have been useful for Gallistel to go into detail about the differences between short term and long term memory and how these differences relate to the problem at hand.

iv. Plos-One: Low Levels of Empathic Concern Predict Utilitarian Moral Judgment.

“An extensive body of prior research indicates an association between emotion and moral judgment. In the present study, we characterized the predictive power of specific aspects of emotional processing (e.g., empathic concern versus personal distress) for different kinds of moral responders (e.g., utilitarian versus non-utilitarian). Across three large independent participant samples, using three distinct pairs of moral scenarios, we observed a highly specific and consistent pattern of effects. First, moral judgment was uniquely associated with a measure of empathy but unrelated to any of the demographic or cultural variables tested, including age, gender, education, as well as differences in “moral knowledge” and religiosity. Second, within the complex domain of empathy, utilitarian judgment was consistently predicted only by empathic concern, an emotional component of empathic responding. In particular, participants who consistently delivered utilitarian responses for both personal and impersonal dilemmas showed significantly reduced empathic concern, relative to participants who delivered non-utilitarian responses for one or both dilemmas. By contrast, participants who consistently delivered non-utilitarian responses on both dilemmas did not score especially high on empathic concern or any other aspect of empathic responding.”

In case you were wondering, the difference hasn’t got anything to do with a difference in the ability to ‘see things from the other guy’s point of view’: “the current study demonstrates that utilitarian responders may be as capable at perspective taking as non-utilitarian responders. As such, utilitarian moral judgment appears to be specifically associated with a diminished affective reactivity to the emotions of others (empathic concern) that is independent of one’s ability for perspective taking”.

On a small sidenote, I’m not really sure I get the authors at all – one of the questions they ask in the paper’s last part is whether ‘utilitarians are simply antisocial?’ This is such a stupid way to frame this I don’t even know how to begin to respond; I mean, utilitarians make better decisions that save more lives, and that’s consistent with them being antisocial? I should think the ‘social’ thing to do would be to save as many lives as possible. Dead people aren’t very social, and when your actions cause more people to die they also decrease the scope for future social interaction.

v. Lastly, some Khan Academy videos:

(Relevant links: Compliance, Preload).

(This one may be very hard to understand if you haven’t covered this stuff before, but I figured I might as well post it here. If you don’t know e.g. what myosin and actin are you probably won’t get much out of this video. If you don’t watch it, this part of what’s covered is probably the most important part to take away from it.)

It’s been a long time since I checked out the Brit Cruise information theory playlist, and I was happy to learn that he’s updated it and added some more stuff. I like the way he combines historical stuff with a ‘how does it actually work, and how did people realize that’s how it works’ approach – learning how people figured out stuff is to me sometimes just as fascinating as learning what they figured out:

(Relevant wikipedia links: Leyden jar, Electrostatic generator, Semaphore line. Cruise’s play with the cat and the amber may look funny, but there’s a point to it: “The Greek word for amber is ηλεκτρον (“elektron”) and is the origin of the word “electricity”.” – from the first link).

(Relevant wikipedia links: Galvanometer, Morse code)

April 14, 2013 Posted by | Cardiology, Computer science, Cryptography, Econometrics, Khan Academy, Medicine, Neurology, Papers, Physics, Random stuff, Statistics | Leave a comment

Khan Academy videos of interest

Took me a minute to solve without hints. I had to scribble a few numbers down (like Khan does in the video), but you should be able to handle it without hints. (Actually I think some of the earlier brainteasers on the playlist are harder than this one and that some of the later ones are easier, but it’s been a while since I saw the first ones.)


Much more here.

Naturally this is from the computer science section.

It’s been a while since I’ve last been to Khan Academy – it seems that these days they have an entire section about influenza.

February 10, 2013 Posted by | Cardiology, Computer science, Infectious disease, Khan Academy, Lectures, Mathematics, Medicine | Leave a comment

Wikipedia articles of interest

i. Huia (featured).

“The Huia (Māori: [ˈhʉia]; Heteralocha acutirostris) was the largest species of New Zealand wattlebird and was endemic to the North Island of New Zealand.”

What they looked like:

(Image from the article: Keulemans’ illustration of Huia.)

“Even though the Huia is frequently mentioned in biology and ornithology textbooks because of this striking dimorphism, not much is known about its biology; it was little studied before it was driven to extinction. The Huia is one of New Zealand’s best known extinct birds because of its bill shape, its sheer beauty and special place in Māori culture and oral tradition. […]

The Huia had no fear of people; females allowed themselves to be handled on the nest,[8] and birds could easily be captured by hand.[11] […]

The Huia was found throughout the North Island before humans arrived in New Zealand. The Māori arrived around 800 years ago, and by the arrival of European settlers in the 1840s, habitat destruction and hunting had reduced the bird’s range to the southern North Island.[13] However, Māori hunting pressures on the Huia were limited to some extent by traditional protocols. The hunting season was from May to July when the bird’s plumage was in prime condition, while a rāhui (hunting ban) was enforced in spring and summer.[15] It was not until European settlement that the Huia’s numbers began to decline severely, due mainly to two well-documented factors: widespread deforestation and overhunting. […]

Habitat destruction and the predations of introduced species were problems faced by all New Zealand birds, but in addition the Huia faced massive pressure from hunting. Due to its pronounced sexual dimorphism and its beauty, Huia were sought after as mounted specimens by wealthy collectors in Europe[42] and by museums all over the world.[15][20] These individuals and institutions were willing to pay large sums of money for good specimens, and the overseas demand created a strong financial incentive for hunters in New Zealand.[42]”

ii. British colonization of the Americas. Not very detailed, but this article is a good place to start if one wants to read about the various colonies; it has a lot of links.

iii. Iron Dome.

Iron Dome (Hebrew: כִּפַּת בַּרְזֶל, kipat barzel) also known as “Iron Cap”[6] is a mobile all-weather air defense system[5] developed by Rafael Advanced Defense Systems.[4] It is a missile system designed to intercept and destroy short-range rockets and artillery shells fired from distances of 4 to 70 kilometers away and whose trajectory would take them to a populated area.[7][8] […] The system, created as a defensive countermeasure to the rocket threat against Israel’s civilian population on its northern and southern borders, uses technology first employed in Rafael’s SPYDER system. Iron Dome was declared operational and initially deployed on 27 March 2011 near Beersheba.[10] On 7 April 2011, the system successfully intercepted a Grad rocket launched from Gaza for the first time.[11] On 10 March 2012, The Jerusalem Post reported that the system shot down 90% of rockets launched from Gaza that would have landed in populated areas.[8] By November 2012, it had intercepted 400+ rockets.[12][13] Based on this success, Defense reporter Mark Thompson estimates that Iron Dome is the most effective and most tested missile shield in existence.[14]

The Iron Dome system is also effective against aircraft up to an altitude of 32,800 ft (10,000 m).[15] […]

(Image from the article: an Iron Dome battery near Sderot.)

During the 2006 Second Lebanon War, approximately 4,000 Hezbollah-fired rockets (the great majority of which were short-range Katyusha rockets) landed in northern Israel, including on Haifa, the country’s third largest city. The massive rocket barrage killed 44 Israeli civilians[16] and caused some 250,000 Israeli citizens to evacuate and relocate to other parts of Israel while an estimated 1,000,000 Israelis were confined in or near shelters during the conflict.[17]

To the south, more than 4,000 rockets and 4,000 mortar bombs were fired into Israel from Gaza between 2000 and 2008, principally by Hamas. Almost all of the rockets fired were Qassams launched by 122 mm Grad launchers smuggled into the Gaza Strip, giving longer range than other launch methods. Nearly 1,000,000 Israelis living in the south are within rocket range, posing a serious security threat to the country and its citizens.[18]

In February 2007, Defense Minister Amir Peretz selected Iron Dome as Israel’s defensive solution to this short-range rocket threat.[19] […]

In November 2012, during Operation Pillar of Defense, the Iron Dome’s effectiveness was estimated by Israeli officials at between 75 and 95 percent.[88] According to Israeli officials, of the approximately 1,000 missiles and rockets fired into Israel by Hamas from the beginning of Operation Pillar of Defense up to November 17, 2012, Iron Dome identified two thirds as not posing a threat and intercepted 90 percent of the remaining 300.[89] During this period the only Israeli casualties were three individuals killed in missile attacks after a malfunction of the Iron Dome system.[90]

In comparison with other air defense systems, the effectiveness rate of Iron Dome is very high.[88]”

iv. Evolution of cetaceans (whales and dolphins). They’re a lot ‘younger’ than I thought.

v. Curiosity rover.

(Image from the article: high-resolution self-portrait taken by the Curiosity rover’s arm camera.)

This is an actual (composite) picture of a robot on another planet. At this moment it is driving around doing scientific experiments. On another planet. I’ll say it again: Living in the 21st century is awesome.

vi. Halting Problem.

“In computability theory, the halting problem can be stated as follows: “Given a description of an arbitrary computer program, decide whether the program finishes running or continues to run forever“. This is equivalent to the problem of deciding, given a program and an input, whether the program will eventually halt when run with that input, or will run forever.

Alan Turing proved in 1936 that a general algorithm to solve the halting problem for all possible program-input pairs cannot exist. A key part of the proof was a mathematical definition of a computer and program, what became known as a Turing machine; the halting problem is undecidable over Turing machines. It is one of the first examples of a decision problem. […]

The halting problem is a decision problem about properties of computer programs on a fixed Turing-complete model of computation, i.e. all programs that can be written in some given programming language that is general enough to be equivalent to a Turing machine. The problem is to determine, given a program and an input to the program, whether the program will eventually halt when run with that input. In this abstract framework, there are no resource limitations on the amount of memory or time required for the program’s execution; it can take arbitrarily long, and use arbitrarily much storage space, before halting. The question is simply whether the given program will ever halt on a particular input. […]

One approach to the problem might be to run the program for some number of steps and check if it halts. But if the program does not halt, it is unknown whether the program will eventually halt or run forever.

Turing proved there cannot exist an algorithm which will always correctly decide whether, for a given arbitrary program and its input, the program halts when run with that input; the essence of Turing’s proof is that any such algorithm can be made to contradict itself, and therefore cannot be correct. […]

The halting problem is historically important because it was one of the first problems to be proved undecidable.”
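The self-referential trick at the heart of the proof can be sketched in a few lines of Python. Note that halts below is a purely hypothetical oracle assumed for the sake of contradiction – the whole point of the argument is that it cannot actually be implemented:

def halts(program, program_input):
    """Hypothetical oracle: returns True iff program(program_input)
    eventually halts. Turing's argument shows no such total,
    always-correct function can exist."""
    ...

def paradox(program):
    # Ask the oracle about a program run on its own source, then do
    # the opposite of whatever it predicts.
    if halts(program, program):
        while True:   # loop forever if the oracle says "halts"
            pass
    else:
        return        # halt at once if the oracle says "runs forever"

# paradox(paradox) now defeats the oracle: if halts(paradox, paradox)
# returns True, paradox(paradox) loops forever; if it returns False,
# paradox(paradox) halts immediately. Either answer is wrong, so no
# always-correct halts() can exist.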

vii. Fetal Alcohol Syndrome.

Fetal alcohol syndrome (FAS) is a pattern of mental and physical defects that can develop in a fetus in association with high levels of alcohol consumption during pregnancy. […]

Alcohol crosses the placental barrier and can stunt fetal growth or weight, create distinctive facial stigmata, damage neurons and brain structures, which can result in psychological or behavioral problems, and cause other physical damage.[6][7][8] Surveys found that in the United States, 10–15% of pregnant women report having recently drunk alcohol, and up to 30% drink alcohol at some point during pregnancy.[9][10][11]

The main effect of FAS is permanent central nervous system damage, especially to the brain. Developing brain cells and structures can be malformed or have development interrupted by prenatal alcohol exposure; this can create an array of primary cognitive and functional disabilities (including poor memory, attention deficits, impulsive behavior, and poor cause-effect reasoning) as well as secondary disabilities (for example, predispositions to mental health problems and drug addiction).[8][12] Alcohol exposure presents a risk of fetal brain damage at any point during a pregnancy, since brain development is ongoing throughout pregnancy.[13]

Fetal alcohol exposure is the leading known cause of mental retardation in the Western world.[14][15] In the United States and Europe, the FAS prevalence rate is estimated to be between 0.2-2 in every 1000 live births.[16][17] FAS should not be confused with Fetal Alcohol Spectrum Disorders (FASD), a condition which describes a continuum of permanent birth defects caused by maternal consumption of alcohol during pregnancy, which includes FAS, as well as other disorders, and which affects about 1% of live births in the US.[18][19][20][21] The lifetime medical and social costs of FAS are estimated to be as high as US$800,000 per child born with the disorder.[22]

That’s a US estimate, but I think a Danish one would be within the same order of magnitude. Imagine how the incentives of expectant mothers would change if we fined females who gave birth to a child with FAS, letting the fine be some fraction of the total estimated social costs. And remind me again why we do not do this?

December 15, 2012 Posted by | Astronomy, Biology, Computer science, Evolutionary biology, History, Mathematics, Medicine, Neurology, Wikipedia, Zoology | Leave a comment

Stuff

Some links and stuff from around the web:

i. A lecture on Averaging algorithms and distributed optimization. He’s quite good but this is not for everyone; you need a maths/stats background to some extent to understand what’s going on. I’ve seen many types of lectures online, but this one is probably one of the ones ‘closest’ to the type of lectures that are available to students where I study the kind of stuff I study, in terms of the format; there’s a lot of math, there’s a very clearly defined structure and the lecturer knows exactly what he’s supposed to cover during the lecture, you proceed from the simple and then add some complexity/exceptions etc. along the way, some i’s and j’s will be mixed up and a plus or minus sign will need to be corrected somewhere, the lecturer rarely asks the people attending class any questions and if it’s a good lecture there will not be a lot of questions from the audience either. It reminded me of the econometrics lectures I had some time ago, also because the stuff covered in the lecture relates a bit to material covered back then (‘gradient-like methods’, the convergence properties of various optimization algorithms, etc.).

ii. Cyanide & happiness. I found the comic a week ago or so and I like it.

iii. From edge.org: What is life? A 21st century perspective, by Craig Venter. Not a bad way to spend an hour of your life.

iv. A list of free statistical software available online. There are a lot of those around!

v. An awesome retraction-story. The peer-review process is not always bulletproof:

“[Hyung-In Moon] suggested preferred reviewers during the submission which were him or colleagues under bogus identities and accounts. In some cases the names of real people were provided (so if Googling them, you would see that they did exist) but he created email accounts for them which he or associates had access to and which were then used to provide peer review comments. In other cases he just made up names and email addresses. The review comments submitted by these reviewers were almost always favourable but still provided suggestions for paper improvement.” (via Ed Yong)

vi. “In a study now in press in Neurobiology of Aging (download PDF copy here), we studied the effects of healthy aging on how the brain processes different kinds of visual information. Based on prior work showing that visual attention towards objects predominantly recruited regions of the medial temporal lobe (MTL), compared to attention towards positions, we tested whether this specialization would wither with increasing age.

Basically, we tested the level of brain specialization by comparing the BOLD fMRI signal directly between object processing and position processing. We looked at each MTL structure individually by analyzing the results in each individual brain (native space) rather than relying on spatial normalization of brains, which is known to induce random and systematic distortions in MTL structures (see here and here for PDF of conference presentations I’ve had on this).

Running the test with functional MRI, we found that several regions showed a change in specialization. During encoding, the right amygdala and parahippocampal cortex, and tentatively other surrounding MTL regions, showed such decreases in specialization.

During preparation and rehearsal, no changes reached significance.

However, during the stage of recognition, more or less the entire MTL region demonstrated detrimental changes with age. That is, with increasing age, those regions that tend to show a strong response to object processing compared to spatial processing, now dwindle in this effect. At higher ages, such as 75+, the ability of the brain to differentiate between object and spatial content is gone in many crucial MTL structures.

This suggests that at least one important change with increasing age is its ability to differentiate between different kinds of content. If your brain is unable to selectively focus on one kind of information (and possibly inhibit processing of other aspects of the information), then neither learning or memory can operate successfully.” (link)

August 28, 2012 Posted by | Biology, comics, Computer science, Lectures, Neurology, Statistics, Studies | Leave a comment

More Khan Academy stuff you should know about

It’s been a while since I’ve been to Khan Academy (actually getting the Kepler badge sort of killed my motivation for a while), but I revisited the site earlier today and I realized that they’ve launched a brand new computer science section which looks really neat.

August 27, 2012 Posted by | Computer science, Khan Academy | Leave a comment

A few notes on Singh’s The Code Book

It seems that nine out of ten readers don’t read/like my book posts, so I probably will try to hold back on those in the future or at least put a bit less effort into them. But I thought I’d just post a quick note here anyway:

I spent part of yesterday and a big chunk of today reading Simon Singh’s The Code Book. I generally liked the book – if you liked Fermat’s Last Theorem, you’ll probably like this book too. I didn’t think much of the last two chapters, but the rest of it was quite entertaining and instructive. You know you have your hands on a book that covers quite a bit of stuff when you find yourself looking up something in an archaeology textbook to check some details in a book about cryptography (the book has a brief chapter which covers the decipherment of the Linear B script, among other things). Having read the book, I can’t not mention here that I blogged this some time ago – needless to say, back then I had no idea how big of a name Hellman is ‘in the cryptography business’ (this was a very big deal – in Singh’s words: “The Diffie-Hellman-Merkle key exchange scheme […] is one of the most counterintuitive discoveries in the history of science, and it forced the cryptographic establishment to rewrite the rules of encryption. […] Hellman had shattered one of the tenets of cryptography and proved that Bob and Alice did not need to meet to agree a secret key.” (p.267))
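The counterintuitive bit – Alice and Bob agreeing on a secret while an eavesdropper sees everything they exchange – is easy to demonstrate with toy numbers. A minimal Python sketch of Diffie-Hellman key exchange (real implementations use primes thousands of bits long; these particular values are purely illustrative):

# Toy Diffie-Hellman key exchange with tiny, illustrative numbers.
p, g = 23, 5            # public: prime modulus and generator

a = 6                   # Alice's secret exponent (never transmitted)
b = 15                  # Bob's secret exponent (never transmitted)

A = pow(g, a, p)        # Alice sends g^a mod p = 8 over the open channel
B = pow(g, b, p)        # Bob sends g^b mod p = 19 over the open channel

# Each side combines the other's public value with its own secret:
alice_key = pow(B, a, p)   # (g^b)^a mod p
bob_key = pow(A, b, p)     # (g^a)^b mod p

assert alice_key == bob_key
print(alice_key)        # 2 - both arrive at the same key without meeting

An eavesdropper sees p, g, A and B, but recovering the shared key from those requires solving the discrete logarithm problem, which is believed to be computationally infeasible at realistic key sizes.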

August 22, 2012 Posted by | Books, Computer science, Cryptography | Leave a comment

Wikipedia articles of interest

i. Shannon–Hartley theorem. Muller talked a little bit about this one in one of the lectures, I don’t remember which, but it’s probably one of the wave-lectures. His coverage is less technical than wikipedia’s. I was considering not including this link because I previously linked to wikipedia’s closely-related article about the Noisy-channel coding theorem, but I decided to do it anyway. From the article:

“In information theory, the Shannon–Hartley theorem tells the maximum rate at which information can be transmitted over a communications channel of a specified bandwidth in the presence of noise. It is an application of the noisy channel coding theorem to the archetypal case of a continuous-time analog communications channel subject to Gaussian noise. The theorem establishes Shannon’s channel capacity for such a communication link, a bound on the maximum amount of error-free digital data (that is, information) that can be transmitted with a specified bandwidth in the presence of the noise interference, assuming that the signal power is bounded, and that the Gaussian noise process is characterized by a known power or power spectral density. […]

Considering all possible multi-level and multi-phase encoding techniques, the Shannon–Hartley theorem states the channel capacity C, meaning the theoretical tightest upper bound on the information rate (excluding error correcting codes) of clean (or arbitrarily low bit error rate) data that can be sent with a given average signal power S through an analog communication channel subject to additive white Gaussian noise of power N, is:

C = B log₂(1 + S/N)

where

C is the channel capacity in bits per second;
B is the bandwidth of the channel in hertz (passband bandwidth in case of a modulated signal);
S is the average received signal power over the bandwidth (in case of a modulated signal, often denoted C, i.e. modulated carrier), measured in watts (or volts squared);
N is the average noise or interference power over the bandwidth, measured in watts (or volts squared); and
S/N is the signal-to-noise ratio (SNR) or the carrier-to-noise ratio (CNR) of the communication signal to the Gaussian noise interference expressed as a linear power ratio (not as logarithmic decibels).”
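Plugging numbers into the formula is straightforward. A small Python example with illustrative values of my own choosing – a 3 kHz channel (roughly a telephone line) at a signal-to-noise ratio of 30 dB:

import math

def channel_capacity(bandwidth_hz, snr_linear):
    # Shannon-Hartley: C = B * log2(1 + S/N), in bits per second.
    return bandwidth_hz * math.log2(1 + snr_linear)

snr_db = 30                        # 30 dB means S/N = 10**(30/10) = 1000
snr_linear = 10 ** (snr_db / 10)
print(channel_capacity(3000, snr_linear))   # ~29,902 bits per second

That roughly 30 kbit/s ceiling is, not coincidentally, close to the speeds at which dial-up modems over ordinary phone lines topped out.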

ii. Expansion joint. Also covered by Muller, this is important stuff that people don’t think about:

“An expansion joint or movement joint is an assembly designed to safely absorb the heat-induced expansion and contraction of various construction materials, to absorb vibration, to hold certain parts together, or to allow movement due to ground settlement or earthquakes. They are commonly found between sections of sidewalks, bridges, railway tracks, piping systems, ships, and other structures.

Throughout the year, building faces, concrete slabs, and pipelines will expand and contract due to the warming and cooling through seasonal variation, or due to other heat sources. Before expansion joint gaps were built into these structures, they would crack under the stress induced.”

If you have any kind of construction of a significant size/length, thermal expansion will cause problems unless you try to deal with it somehow. Using expansion joints to deal with this problem is another one of those hidden ‘good ideas’ people don’t think about, because they probably weren’t even aware there was a problem to be solved.
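To get a sense of the magnitudes involved: linear thermal expansion is governed by ΔL = αLΔT, and a quick back-of-the-envelope Python calculation (with a typical expansion coefficient for steel and made-up values for span length and temperature swing) shows why the gaps are needed:

alpha_steel = 1.2e-5   # per °C; typical linear expansion coefficient for steel
length_m = 100.0       # a 100 m bridge span (illustrative)
delta_t_c = 40.0       # summer-winter temperature swing in °C (illustrative)

delta_l_mm = alpha_steel * length_m * delta_t_c * 1000.0
print(f"{delta_l_mm:.0f} mm of movement to absorb")   # ~48 mm

Almost five centimeters of movement in a single span is far more than rigid materials like concrete can accommodate without cracking, which is exactly what the joints are for.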

iii. Beaufort scale.

iv. Belle Gunness. Not all serial killers are/were male:

“Personal – comely widow who owns a large farm in one of the finest districts in La Porte County, Indiana, desires to make the acquaintance of a gentleman equally well provided, with view of joining fortunes. No replies by letter considered unless sender is willing to follow answer with personal visit. Triflers need not apply.[2]” […]

“The suitors kept coming, but none, except for Anderson, ever left the Gunness farm. By this time, she had begun ordering huge trunks to be delivered to her home. Hack driver Clyde Sturgis delivered many such trunks to her from La Porte and later remarked how the heavyset woman would lift these enormous trunks “like boxes of marshmallows”, tossing them onto her wide shoulders and carrying them into the house. She kept the shutters of her house closed day and night; farmers traveling past the dwelling at night saw her digging in the hog pen.” Guess what they found buried in the hog pen later?

v. English garden.

“The English garden, also called English landscape park (French: Jardin anglais, Italian: Giardino all’inglese, German: Englischer Landschaftsgarten, Portuguese: Jardim inglês), is a style of Landscape garden which emerged in England in the early 18th century, and spread across Europe, replacing the more formal, symmetrical Garden à la française of the 17th century as the principal gardening style of Europe.[1] The English garden presented an idealized view of nature. They were often inspired by paintings of landscapes by Claude Lorraine and Nicolas Poussin, and some were influenced by the classic Chinese gardens of the East,[2] which had recently been described by European travelers.[2] The English garden usually included a lake, sweeps of gently rolling lawns set against groves of trees, and recreations of classical temples, Gothic ruins, bridges, and other picturesque architecture, designed to recreate an idyllic pastoral landscape. By the end of the 18th century the English garden was being imitated by the French landscape garden, and as far away as St. Petersburg, Russia, in Pavlovsk, the gardens of the future Emperor Paul. It also had a major influence on the form of the public parks and gardens which appeared around the world in the 19th century.[3]”


vi. Aquifer.

“An aquifer is an underground layer of water-bearing permeable rock or unconsolidated materials (gravel, sand, or silt) from which groundwater can be usefully extracted using a water well. The study of water flow in aquifers and the characterization of aquifers is called hydrogeology. Related terms include aquitard, which is a bed of low permeability along an aquifer,[1] and aquiclude (or aquifuge), which is a solid, impermeable area underlying or overlying an aquifer. If the impermeable area overlies the aquifer, pressure could cause it to become a confined aquifer.” The article has much more.

vii. Great Tit.

“The Great Tit (Parus major) is a passerine bird in the tit family Paridae. It is a widespread and common species throughout Europe, the Middle East, Central and Northern Asia, and parts of North Africa in any sort of woodland. It is generally resident, and most Great Tits do not migrate except in extremely harsh winters. Until 2005 this species was lumped with numerous other subspecies. DNA studies have shown these other subspecies to be distinctive from the Great Tit and these have now been separated as two separate species, the Cinereous Tit of southern Asia, and the Japanese Tit of East Asia. The Great Tit remains the most widespread species in the genus Parus.

The Great Tit is a distinctive bird, with a black head and neck, prominent white cheeks, olive upperparts and yellow underparts, with some variation amongst the numerous subspecies. It is predominantly insectivorous in the summer, but will consume a wider range of food items in the winter months, including small hibernating bats.[2] Like all tits it is a cavity nester, usually nesting in a hole in a tree. The female lays around 12 eggs and incubates them alone, although both parents raise the chicks. In most years the pair will raise two broods. The nests may be raided by woodpeckers, squirrels and weasels and infested with fleas, and adults may be hunted by Sparrowhawks. The Great Tit has adapted well to human changes in the environment and is a common and familiar bird in urban parks and gardens. The Great Tit is also an important study species in ornithology. […]

Great Tits combine dietary versatility with a considerable amount of intelligence and the ability to solve problems with insight learning, that is to solve a problem through insight rather than trial and error.[9] In England, Great Tits learned to break the foil caps of milk bottles delivered at the doorstep of homes to obtain the cream at the top.[24] This behaviour, first noted in 1921, spread rapidly in the next two decades.[25] In 2009, Great Tits were reported killing and eating pipistrelle bats. This is the first time a songbird has been seen to hunt bats. The tits only do this during winter when the bats are hibernating and other food is scarce.[26] They have also been recorded using tools, using a conifer needle in the bill to extract larvae from a hole in a tree.[9] […]

The Great Tit has generally adjusted to human modifications of the environment. It is more common and has better breeding success in areas with undisturbed forest cover, but it has adapted to human modified habitats. It can be very common in urban areas.[9] For example, the breeding population in the city of Sheffield (a city of half a million people) has been estimated at 17,164 individuals.[45] In adapting to human environments its song has been observed to change in noise-polluted urban environments. In areas with low frequency background noise pollution, the song has a higher frequency than in quieter areas.[46]”

July 10, 2012 Posted by | Biology, Botany, Computer science, Geology, History, Wikipedia, Zoology | Leave a comment

Wikipedia articles of interest

i. Alternation of generations. It’s a bit technical, but I thought the article was interesting:

Alternation of generations (also known as alternation of phases or metagenesis) is a term primarily used to describe the life cycle of plants (taken here to mean the Archaeplastida). A multicellular sporophyte, which is diploid with 2N paired chromosomes (i.e. N pairs), alternates with a multicellular gametophyte, which is haploid with N unpaired chromosomes. A mature sporophyte produces spores by meiosis, a process which results in a reduction of the number of chromosomes by a half. Spores germinate and grow into a gametophyte. At maturity, the gametophyte produces gametes by mitosis, which does not alter the number of chromosomes. Two gametes (originating from different organisms of the same species or from the same organism) fuse to produce a zygote, which develops into a diploid sporophyte. This cycle, from sporophyte to sporophyte (or equally from gametophyte to gametophyte), is the way in which all land plants and many algae undergo sexual reproduction.

The relationship between the sporophyte and gametophyte varies among different groups of plants. In those algae which have alternation of generations, the sporophyte and gametophyte are separate independent organisms, which may or may not have a similar appearance. In liverworts, mosses and hornworts, the sporophyte is less well developed than the gametophyte, being entirely dependent on it in the first two groups. By contrast, the fern gametophyte is less well developed than the sporophyte, forming a small flattened thallus. In flowering plants, the reduction of the gametophyte is even more extreme; it consists of just a few cells which grow entirely inside the sporophyte.

All animals develop differently. A mature animal is diploid and so is, in one sense, equivalent to a sporophyte. However, an animal directly produces haploid gametes by meiosis. No haploid spores capable of dividing are produced, so neither is a haploid gametophyte. There is no alternation between diploid and haploid forms. […] Life cycles, such as those of plants, with alternating haploid and diploid phases can be referred to as diplohaplontic (the equivalent terms haplodiplontic, diplobiontic or dibiontic are also in use). Life cycles, such as those of animals, in which there is only a diploid phase are referred to as diplontic. (Life cycles in which there is only a haploid phase are referred to as haplontic.)

ii. Lightning. Long article, lots of stuff and links:

Lightning is an atmospheric electrical discharge (spark) accompanied by thunder, usually associated with and produced by cumulonimbus clouds, but also occurring during volcanic eruptions or in dust storms.[1] From this discharge of atmospheric electricity, a leader of a bolt of lightning can travel at speeds of 220,000 km/h (140,000 mph), and can reach temperatures approaching 30,000 °C (54,000 °F), hot enough to fuse silica sand into glass channels known as fulgurites, which are normally hollow and can extend as much as several meters into the ground.[2][3]

There are some 16 million lightning storms in the world every year.[4] Lightning causes ionisation in the air through which it travels, leading to the formation of nitric oxide and ultimately, nitric acid, of benefit to plant life below.

Lightning can also occur within the ash clouds from volcanic eruptions,[5] or can be caused by violent forest fires which generate sufficient dust to create a static charge.[1][6]

How lightning initially forms is still a matter of debate.[7] Scientists have studied root causes ranging from atmospheric perturbations (wind, humidity, friction, and atmospheric pressure) to the impact of solar wind and accumulation of charged solar particles.[4] Ice inside a cloud is thought to be a key element in lightning development, and may cause a forcible separation of positive and negative charges within the cloud, thus assisting in the formation of lightning.[4]

The irrational fear of lightning (and thunder) is astraphobia. The study or science of lightning is called fulminology, and someone who studies lightning is referred to as a fulminologist.[8] [I had no idea there was a name for this!] […]

An old estimate of the frequency of lightning on Earth was 100 times a second. Now that there are satellites that can detect lightning, including in places where there is nobody to observe it, it is known to occur on average 44 ± 5 times a second, for a total of nearly 1.4 billion flashes per year;[101][102] 75% of these flashes are either cloud-to-cloud or intra-cloud and 25% are cloud-to-ground.[103]

Approximately 70% of lightning occurs in the tropics where the majority of thunderstorms occur. The place where lightning occurs most often (according to the data from 2004–2005) is near the small village of Kifuka in the mountains of eastern Democratic Republic of the Congo,[105] where the elevation is around 975 metres (3,200 ft). On average this region receives 158 lightning strikes per 1 square kilometer (0.39 sq mi) a year.[102]

Above the Catatumbo river, which feeds Lake Maracaibo in Venezuela, Catatumbo lightning flashes several times per minute, 140 to 160 nights per year, accounting for 25% of the world’s production of upper-atmospheric ozone. Singapore has one of the highest rates of lightning activity in the world.[106] The city of Teresina in northern Brazil has the third-highest rate of occurrences of lightning strikes in the world.”
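As a quick sanity check of the quoted figures, 44 flashes per second really does work out to roughly 1.4 billion flashes per year:

flashes_per_second = 44                     # the satellite-era estimate quoted above
seconds_per_year = 60 * 60 * 24 * 365
print(f"{flashes_per_second * seconds_per_year:,}")   # 1,387,584,000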

iii. Dreadnought (featured).

“The dreadnought was the predominant type of battleship in the early 20th-century. The first of the kind, the Royal Navy‘s Dreadnought, had such an impact when launched in 1906 that similar battleships built after her were referred to as “dreadnoughts”, and earlier battleships became known as pre-dreadnoughts. Her design had two revolutionary features: an “all-big-gun” armament scheme and steam turbine propulsion. The arrival of the dreadnoughts renewed the naval arms race, principally between the United Kingdom and Germany but reflected worldwide, as the new class of warships became a crucial symbol of national power. […]

While dreadnought-building consumed vast resources in the early 20th century, there was only one battle between large dreadnought fleets. At the Battle of Jutland, the British and German navies clashed with no decisive result. The term “dreadnought” gradually dropped from use after World War I, especially after the Washington Naval Treaty, as all remaining battleships shared dreadnought characteristics; it can also be used to describe battlecruisers, the other type of ship resulting from the dreadnought revolution.[1] […]

The building of Dreadnought coincided with increasing tension between the United Kingdom and Germany. Germany had begun to build a large battlefleet in the 1890s, as part of a deliberate policy to challenge British naval supremacy. With the conclusion of the Entente Cordiale between the United Kingdom and France in April 1904, it became increasingly clear that the United Kingdom’s principal naval enemy would be Germany, which was building up a large, modern fleet under the ‘Tirpitz’ laws. This rivalry gave rise to the two largest dreadnought fleets of the pre-war period.[93]

The first German response to Dreadnought came with the Nassau class, laid down in 1907. This was followed by the Helgoland class in 1909. Together with two battlecruisers—a type for which the Germans had less admiration than Fisher, but which could be built under authorization for armored cruisers, rather than capital ships—these classes gave Germany a total of ten modern capital ships built or building in 1909. While the British ships were somewhat faster and more powerful than their German equivalents, a 12:10 ratio fell far short of the 2:1 ratio that the Royal Navy wanted to maintain.[94]

In 1909, the British Parliament authorized an additional four capital ships, holding out hope Germany would be willing to negotiate a treaty about battleship numbers. If no such solution could be found, an additional four ships would be laid down in 1910. Even this compromise solution meant (when taken together with some social reforms) raising taxes enough to prompt a constitutional crisis in the United Kingdom in 1909–10. In 1910, the British eight-ship construction plan went ahead, including four Orion (1910)-class super-dreadnoughts, and augmented by battlecruisers purchased by Australia and New Zealand. In the same period of time, Germany laid down only three ships, giving the United Kingdom a superiority of 22 ships to 13. […]

The dreadnought race stepped up in 1910 and 1911, with Germany laying down four capital ships each year and the United Kingdom five. Tension came to a head following the German Naval Law of 1912. This proposed a fleet of 33 German battleships and battlecruisers, outnumbering the Royal Navy in home waters. To make matters worse for the United Kingdom, the Imperial Austro-Hungarian Navy was building four dreadnoughts, while the Italians had four and were building two more. Against such threats, the Royal Navy could no longer guarantee vital British interests. The United Kingdom was faced with a choice of building more battleships, withdrawing from the Mediterranean, or seeking an alliance with France. Further naval construction was unacceptably expensive at a time when social welfare provision was making calls on the budget. Withdrawing from the Mediterranean would mean a huge loss of influence, weakening British diplomacy in the Mediterranean and shaking the stability of the British Empire. The only acceptable option, and the one recommended by First Lord of the Admiralty Winston Churchill, was to break with the policies of the past and make an arrangement with France. The French would assume responsibility for checking Italy and Austria-Hungary in the Mediterranean, while the British would protect the north coast of France. In spite of some opposition from British politicians, the Royal Navy organised itself on this basis in 1912.[96]

In spite of these important strategic consequences, the 1912 Naval Law had little bearing on the battleship force ratios. The United Kingdom responded by laying down ten new super-dreadnoughts in her 1912 and 1913 budgets—ships of the Queen Elizabeth and Revenge classes, which introduced a further step change in armament, speed and protection—while Germany laid down only five, focusing resources on the Army.[97]”

iv. Travelling salesman problem.

“The travelling salesman problem (TSP) is an NP-hard problem in combinatorial optimization studied in operations research and theoretical computer science. Given a list of cities and their pairwise distances, the task is to find the shortest possible route that visits each city exactly once and returns to the origin city. It is a special case of the travelling purchaser problem.

The problem was first formulated as a mathematical problem in 1930 and is one of the most intensively studied problems in optimization. It is used as a benchmark for many optimization methods. Even though the problem is computationally difficult, a large number of heuristics and exact methods are known, so that some instances with tens of thousands of cities can be solved.

The TSP has several applications even in its purest formulation, such as planning, logistics, and the manufacture of microchips. Slightly modified, it appears as a sub-problem in many areas, such as DNA sequencing. In these applications, the concept city represents, for example, customers, soldering points, or DNA fragments, and the concept distance represents travelling times or cost, or a similarity measure between DNA fragments. In many applications, additional constraints such as limited resources or time windows make the problem considerably harder. […]

The most direct solution would be to try all permutations (ordered combinations) and see which one is cheapest (using brute force search). The running time for this approach lies within a polynomial factor of O(n!), the factorial of the number of cities, so this solution becomes impractical even for only 20 cities. One of the earliest applications of dynamic programming is the Held–Karp algorithm that solves the problem in time O(n²2ⁿ).[13] […]

An exact solution for 15,112 German towns from TSPLIB was found in 2001 using the cutting-plane method proposed by George Dantzig, Ray Fulkerson, and Selmer M. Johnson in 1954, based on linear programming. The computations were performed on a network of 110 processors located at Rice University and Princeton University (see the Princeton external link). The total computation time was equivalent to 22.6 years on a single 500 MHz Alpha processor. In May 2004, the travelling salesman problem of visiting all 24,978 towns in Sweden was solved: a tour of length approximately 72,500 kilometers was found and it was proven that no shorter tour exists.[16]

In March 2005, the travelling salesman problem of visiting all 33,810 points in a circuit board was solved using Concorde TSP Solver: a tour of length 66,048,945 units was found and it was proven that no shorter tour exists. The computation took approximately 15.7 CPU-years (Cook et al. 2006). In April 2006 an instance with 85,900 points was solved using Concorde TSP Solver, taking over 136 CPU-years […]

Various heuristics and approximation algorithms, which quickly yield good solutions have been devised. Modern methods can find solutions for extremely large problems (millions of cities) within a reasonable time which are with a high probability just 2–3% away from the optimal solution.”
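The brute-force approach mentioned in the quote fits in a few lines of Python. A sketch on a toy instance with a made-up 4-city distance matrix (fine at this scale, hopeless beyond roughly a dozen cities – which is exactly why the heuristics and cutting-plane methods discussed in the article matter):

# Brute-force TSP: try every ordering of the remaining cities and keep
# the cheapest tour. O(n!) growth makes this impractical very quickly.
from itertools import permutations

dist = [                 # symmetric, made-up distances between 4 cities
    [0, 10, 15, 20],
    [10, 0, 35, 25],
    [15, 35, 0, 30],
    [20, 25, 30, 0],
]

def shortest_tour(dist):
    n = len(dist)
    best_len, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):        # fix city 0 as the start
        tour = (0,) + perm + (0,)                 # return to the origin city
        length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if length < best_len:
            best_len, best_tour = length, tour
    return best_len, best_tour

print(shortest_tour(dist))   # (80, (0, 1, 3, 2, 0))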

v. Ediacara biota (featured).

“The Ediacara (/ˌiːdiˈækərə/; formerly Vendian) biota consisted of enigmatic tubular and frond-shaped, mostly sessile organisms which lived during the Ediacaran Period (ca. 635–542 Ma). Trace fossils of these organisms have been found worldwide, and represent the earliest known complex multicellular organisms.[note 1] The Ediacara biota radiated in an event called the Avalon Explosion, 575 million years ago,[1][2] after the Earth had thawed from the Cryogenian period’s extensive glaciation, and largely disappeared contemporaneously with the rapid appearance of biodiversity known as the Cambrian explosion. Most of the currently existing body-plans of animals first appeared only in the fossil record of the Cambrian rather than the Ediacaran. For macroorganisms, the Cambrian biota completely replaced the organisms that populated the Ediacaran fossil record.

The organisms of the Ediacaran Period first appeared around 585 million years ago and flourished until the cusp of the Cambrian 542 million years ago when the characteristic communities of fossils vanished. The earliest reasonably diverse Ediacaran community was discovered in 1995 in Sonora, Mexico, and is approximately 585 million years in age, roughly synchronous with the Gaskiers glaciation.[3] While rare fossils that may represent survivors have been found as late as the Middle Cambrian (510 to 500 million years ago) the earlier fossil communities disappear from the record at the end of the Ediacaran leaving only curious fragments of once-thriving ecosystems.[4] Multiple hypotheses exist to explain the disappearance of this biota, including preservation bias, a changing environment, the advent of predators and competition from other life-forms.

Determining where Ediacaran organisms fit in the tree of life has proven challenging; it is not even established that they were animals, with suggestions ranging from lichens (fungus-alga symbionts), algae, protists known as foraminifera, fungi or microbial colonies, to hypothetical intermediates between plants and animals. The morphology and habit of some taxa (e.g. Funisia dorothea) suggest relationships to Porifera or Cnidaria.[5] Kimberella may show a similarity to molluscs, and other organisms have been thought to possess bilateral symmetry, although this is controversial. Most macroscopic fossils are morphologically distinct from later life-forms: they resemble discs, tubes, mud-filled bags or quilted mattresses. Due to the difficulty of deducing evolutionary relationships among these organisms, some paleontologists have suggested that they represent completely extinct lineages that do not resemble any living organism. One paleontologist proposed a separate kingdom-level category, Vendozoa (now renamed Vendobionta),[6] in the Linnaean hierarchy for the Ediacaran biota. If these enigmatic organisms left no descendants, their strange forms might be seen as a “failed experiment” in multicellular life, with later multicellular life independently evolving from unrelated single-celled organisms.[7]

Terms like ‘may have’ (9), ‘perhaps’ (3) and ‘probably’ (3) are abundant in the article, but think about how long ago this was. I think it’s frankly just incredibly awesome that we even know anything at all.

vi. Three Emperors Dinner. Yes, there’s a Wikipedia article about a dinner that happened more than 100 years ago. Wikipedia is awesome!

“The Dîner des trois empereurs or Three Emperors Dinner was a banquet held at Café Anglais in Paris, France on 7 June 1867.

It was prepared by chef Adolphe Dugléré at the request of King William I of Prussia, who frequented the café during the Exposition Universelle. He requested a meal to be remembered and at which no expense was to be spared for himself and his guests: Tsar Alexander II of Russia, his son the tsarevitch (who later became Tsar Alexander III), and Prince Otto von Bismarck. The cellar master, Claudius Burdel, was instructed to accompany the dishes with the greatest wines in the world, including a Roederer champagne in a special lead glass bottle, so Tsar Alexander could admire the bubbles and golden colour.[1]

The banquet consisted of 16 courses with eight wines served over eight hours. The cost of the meal was 400 francs per person[2] (about €8,800 in 2012 prices).”

June 17, 2012 Posted by | Biology, Botany, Computer science, Evolutionary biology, Genetics, History, Mathematics, Paleontology, Physics, Wikipedia | Leave a comment

Random Wikipedia links of interest

1) Orogeny.

‘Before the development of geologic concepts during the 19th century, the presence of mountains was explained in Christian contexts as a result of the Biblical Deluge. This was an extension of Neoplatonic thought, which influenced early Christian writers and assumed that a perfect Creation would have to have taken the form of a perfect sphere. Such thinking persisted into the 18th century.’

Of course this could just be the confirmation bias talking, but I think the ‘religion makes you more stupid and less knowledgeable’ hypothesis gets yet another point here.

2) Coalworker’s pneumoconiosis – ‘a common affliction of coal miners and others who work with coal, similar to both silicosis from inhaling silica dust, and to the long-term effects of tobacco smoking. Inhaled coal dust progressively builds up in the lungs and cannot be removed by the body; that leads to inflammation, fibrosis, and, in the worst case, necrosis.’

3) Hand grenade. Did you know that a gunpowder version of this weapon (Zhen Tian Lei – that article is only a stub, unfortunately) was developed more than 1000 years ago? I most certainly did not.

4) Gene expression. This is a dangerous article: it has a lot of good links and can cost you many hours of your life if you’re not careful. As I’m sure regular readers would know, the name of the article is of course also the name of one of my favourite blogs.

5) Simpson’s paradox.
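The article’s classic illustration is the kidney-stone study, where one treatment wins within every subgroup but loses in the aggregate. Here’s a small Python sketch of it (numbers quoted from memory, so treat them as illustrative):

```python
# Kidney-stone data as I recall them from the article (Charig et al. 1986).
# Format: (successes, cases) for treatments A and B, split by stone size.
a_small, b_small = (81, 87), (234, 270)   # small stones: A ~93% > B ~87%
a_large, b_large = (192, 263), (55, 80)   # large stones: A ~73% > B ~69%

rate = lambda s: s[0] / s[1]
a_all = (a_small[0] + a_large[0], a_small[1] + a_large[1])
b_all = (b_small[0] + b_large[0], b_small[1] + b_large[1])

print(rate(a_small) > rate(b_small))  # True  -- A better on small stones
print(rate(a_large) > rate(b_large))  # True  -- A better on large stones
print(rate(a_all) > rate(b_all))      # False -- yet B looks better overall (~83% vs ~78%)
```

The reversal arises because treatment A was disproportionately assigned the hard (large-stone) cases; the lopsided group sizes act as a lurking variable.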

6) Noisy-channel coding theorem.

‘the noisy-channel coding theorem establishes that however contaminated with noise interference a communication channel may be, it is possible to communicate digital data (information) nearly error-free up to a given maximum rate through the channel.’

[…]

‘Stated by Claude Shannon in 1948, the theorem describes the maximum possible efficiency of error-correcting methods versus levels of noise interference and data corruption. The theory doesn’t describe how to construct the error-correcting method; it only tells us how good the best possible method can be. Shannon’s theorem has wide-ranging applications in both communications and data storage. This theorem is of foundational importance to the modern field of information theory. Shannon only gave an outline of the proof. The first rigorous proof is due to Amiel Feinstein in 1954.

The Shannon theorem states that given a noisy channel with channel capacity C and information transmitted at a rate R, then if R < C there exist codes that allow the probability of error at the receiver to be made arbitrarily small. This means that, theoretically, it is possible to transmit information nearly without error at any rate below a limiting rate, C.’
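To put a number on that limiting rate: for the textbook special case of a binary symmetric channel that flips each bit with probability p, the capacity is C = 1 − H(p), where H is the binary entropy function. That formula is standard; the Python script below is just my own illustration of it:

```python
from math import log2

def binary_entropy(p):
    """H(p) in bits; H(0) = H(1) = 0 by convention."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def bsc_capacity(p):
    """Capacity of a binary symmetric channel flipping each bit with probability p.

    Below C = 1 - H(p) bits per channel use, Shannon's theorem guarantees codes
    with arbitrarily small error probability; above it, reliable communication
    is impossible.
    """
    return 1.0 - binary_entropy(p)

print(bsc_capacity(0.0))   # 1.0  -- noiseless channel, one full bit per use
print(bsc_capacity(0.11))  # ~0.5 -- half a bit per use survives 11% bit flips
print(bsc_capacity(0.5))   # 0.0  -- pure noise, nothing gets through
```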

I’d file this one under ‘stuff I didn’t know I didn’t know’. There’s a lot of that stuff around.

July 3, 2010 Posted by | Computer science, Genetics, Geology, Medicine, Statistics, Wikipedia | Leave a comment