Econstudentlog

Interactive Coding with “Optimal” Round and Communication Blowup

The youtube description of this one was rather longer than usual, and I decided to quote it in full below:

“The problem of constructing error-resilient interactive protocols was introduced in the seminal works of Schulman (FOCS 1992, STOC 1993). These works show how to convert any two-party interactive protocol into one that is resilient to constant-fraction of error, while blowing up the communication by only a constant factor. Since these seminal works, there have been many follow-up works which improve the error rate, the communication rate, and the computational efficiency. All these works assume that in the underlying protocol, in each round, each party sends a *single* bit. This assumption is without loss of generality, since one can efficiently convert any protocol into one which sends one bit per round. However, this conversion may cause a substantial increase in *round* complexity, which is what we wish to minimize in this work. Moreover, all previous works assume that the communication complexity of the underlying protocol is *fixed* and a priori known, an assumption that we wish to remove. In this work, we consider protocols whose messages may be of *arbitrary* lengths, and where the length of each message and the length of the protocol may be *adaptive*, and may depend on the private inputs of the parties and on previous communication. We show how to efficiently convert any such protocol into another protocol with comparable efficiency guarantees, that is resilient to constant fraction of adversarial error, while blowing up both the *communication* complexity and the *round* complexity by at most a constant factor. Moreover, as opposed to most previous work, our error model not only allows the adversary to toggle with the corrupted bits, but also allows the adversary to *insert* and *delete* bits. In addition, our transformation preserves the computational efficiency of the protocol. Finally, we try to minimize the blowup parameters, and give evidence that our parameters are nearly optimal. This is joint work with Klim Efremenko and Elad Haramaty.”

A few links to stuff covered/mentioned in the lecture:

Coding for interactive communication correcting insertions and deletions.
Efficiently decodable insertion/deletion codes for high-noise and high-rate regimes.
Common reference string model.
Small-bias probability spaces: Efficient constructions and applications.
Interactive Channel Capacity Revisited.
Collision (computer science).
Chernoff bound.

September 6, 2017 Posted by | Computer science, Cryptography, Lectures, Mathematics | Leave a comment

Light

I gave the book two stars. Some quotes and links below.

“Lenses are ubiquitous in image-forming devices […] Imaging instruments have two components: the lens itself, and a light detector, which converts the light into, typically, an electrical signal. […] In every case the location of the lens with respect to the detector is a key design parameter, as is the focal length of the lens which quantifies its ‘ray-bending’ power. The focal length is set by the curvature of the surfaces of the lens and its thickness. More strongly curved surfaces and thicker materials are used to make lenses with short focal lengths, and these are used usually in instruments where a high magnification is needed, such as a microscope. Because the refractive index of the lens material usually depends on the colour of light, rays of different colours are bent by different amounts at the surface, leading to a focus for each colour occurring in a different position. […] lenses with a big diameter and a short focal length will produce the tiniest images of point-like objects. […] about the best you can do in any lens system you could actually make is an image size of approximately one wavelength. This is the fundamental limit to the pixel size for lenses used in most optical instruments, such as cameras and binoculars. […] Much more sophisticated methods are required to see even smaller things. The reason is that the wave nature of light puts a lower limit on the size of a spot of light. […] At the other extreme, both ground- and space-based telescopes for astronomy are very large instruments with relatively simple optical imaging components […]. The distinctive feature of these imaging systems is their size. The most distant stars are very, very faint. Hardly any of their light makes it to the Earth. It is therefore very important to collect as much of it as possible. This requires a very big lens or mirror”.
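
The book keeps the maths out, but the ‘approximately one wavelength’ claim is essentially the standard Abbe diffraction limit; for a lens of numerical aperture NA the smallest resolvable spot size is roughly

d \approx \frac{\lambda}{2\,\mathrm{NA}}, \qquad \mathrm{NA} = n \sin\theta \approx \frac{D}{2f},

where λ is the wavelength, n the refractive index of the medium, θ the half-angle of the cone of light the lens accepts, D its diameter, and f its focal length. Since NA cannot get much above 1 in air, d cannot get much below λ/2, i.e. a few hundred nanometres for visible light, which is why ‘big diameter, short focal length’ lenses give the tiniest spots.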

“[W]hat sort of wave is light? This was […] answered in the 19th century by James Clerk Maxwell, who showed that it is an oscillation of a new kind of entity: the electromagnetic field. This field is effectively a force that acts on electric charges and magnetic materials. […] In the early 19th century, Michael Faraday had shown the close connections between electric and magnetic fields. Maxwell brought them together, as the electromagnetic force field. […] in the wave model, light can be considered as very high frequency oscillations of the electromagnetic field. One consequence of this idea is that moving electric charges can generate light waves. […] When […] charges accelerate — that is, when they change their speed or their direction of motion — then a simple law of physics is that they emit light. Understanding this was one of the great achievements of the theory of electromagnetism.”

“It was the observation of interference effects in a famous experiment by Thomas Young in 1803 that really put the wave picture of light as the leading candidate as an explanation of the nature of light. […] It is interference of light waves that causes the colours in a thin film of oil floating on water. Interference transforms very small distances, on the order of the wavelength of light, into very big changes in light intensity — from no light to four times as bright as the individual constituent waves. Such changes in intensity are easy to detect or see, and thus interference is a very good way to measure small changes in displacement on the scale of the wavelength of light. Many optical sensors are based on interference effects.”
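
The ‘four times as bright’ figure is just what you get from adding wave amplitudes rather than intensities: two equal waves of amplitude A and relative phase Δφ combine to an intensity

I \propto |A + A e^{i\Delta\varphi}|^2 = 4A^2 \cos^2(\Delta\varphi/2),

which swings between zero (Δφ = π, destructive interference) and four times the single-wave intensity (Δφ = 0, constructive interference) as the path difference changes by only half a wavelength; hence the sensitivity of interferometric sensors.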

“[L]ight beams […] gradually diverge as they propagate. This is because a beam of light, which by definition has a limited spatial extent, must be made up of waves that propagate in more than one direction. […] This phenomenon is called diffraction. […] if you want to transmit light over long distances, then diffraction could be a problem. It will cause the energy in the light beam to spread out, so that you would need a bigger and bigger optical system and detector to capture all of it. This is important for telecommunications, since nearly all of the information transmitted over long-distance communications links is encoded on to light beams. […] The means to manage diffraction so that long-distance communication is possible is to use wave guides, such as optical fibres.”
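
A rough rule of thumb (not spelled out in the book) is that a beam of width D spreads with a diffraction half-angle of about

\theta \approx \frac{\lambda}{D},

so a 1 mm wide visible-light beam (λ ≈ 0.5 μm) diverges by roughly half a milliradian, i.e. about half a metre of spread per kilometre travelled; keeping the light confined in a fibre sidesteps this entirely.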

“[O]ptical waves […] guided along a fibre or in a glass ‘chip’ […] underpins the long-distance telecommunications infrastructure that connects people across different continents and powers the Internet. The reason it is so effective is that light-based communications have much more capacity for carrying information than do electrical wires, or even microwave cellular networks. […] In optical communications, […] bits are represented by the intensity of the light beam — typically low intensity is a 0 and higher intensity a 1. The more of these that arrive per second, the faster the communication rate. […] Why is optics so good for communications? There are two reasons. First, light beams don’t easily influence each other, so that a single fibre can support many light pulses (usually of different colours) simultaneously without the messages getting scrambled up. The reason for this is that the glass of which the fibre is made does not absorb light (or only absorbs it in tiny amounts), and so does not heat up and disrupt other pulse trains. […] the ‘crosstalk’ between light beams is very weak in most materials, so that many beams can be present at once without causing a degradation of the signal. This is very different from electrons moving down a copper wire, which is the usual way in which local ‘wired’ communications links function. Electrons tend to heat up the wire, dissipating their energy. This makes the signals harder to receive, and thus the number of different signal channels has to be kept small enough to avoid this problem. Second, light waves oscillate at very high frequencies, and this allows very short pulses to be generated. This means that the pulses can be spaced very close together in time, making the transmission of more bits of information per second possible. […] Fibre-based optical networks can also support a very wide range of colours of light.”

“Waves can be defined by their wavelength, amplitude, and phase […]. Particles are defined by their position and direction of travel […], and a collection of particles by their density […] and range of directions. The media in which the light moves are characterized by their refractive indices. This can vary across space. […] Hamilton showed that what was important was how rapidly the refractive index changed in space compared with the length of an optical wave. That is, if the changes in index took place on a scale of close to a wavelength, then the wave character of light was evident. If it varied more smoothly and very slowly in space then the particle picture provided an adequate description. He showed how the simpler ray picture emerges from the more complex wave picture in certain commonly encountered situations. The appearance of wave-like phenomena, such as diffraction and interference, occurs when the size scales of the wavelength of light and the structures in which it propagates are similar. […] Particle-like behaviour — motion along a well-defined trajectory — is sufficient to describe the situation when all objects are much bigger than the wavelength of light, and have no sharp edges.”

“When things are heated up, they change colour. Take a lump of metal. As it gets hotter and hotter it first glows red, then orange, and then white. Why does this happen? This question stumped many of the great scientists [in the 19th century], including Maxwell himself. The problem was that Maxwell’s theory of light, when applied to this problem, indicated that the colour should get bluer and bluer as the temperature increased, without a limit, eventually moving out of the range of human vision into the ultraviolet—beyond blue—region of the spectrum. But this does not happen in practice. […] Max Planck […] came up with an idea to explain the spectrum emitted by hot objects — so-called ‘black bodies’. He conjectured that when light and matter interact, they do so only by exchanging discrete ‘packets’, or quanta, of energy. […] this conjecture was set to radically change physics.”
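
For reference, the failure alluded to here is the ‘ultraviolet catastrophe’: the classical (Rayleigh–Jeans) prediction for the spectral energy density of a hot body grows without bound at high frequencies, whereas Planck’s assumption that energy is exchanged only in quanta of E = hν gives

u(\nu, T) = \frac{8\pi h \nu^3}{c^3}\,\frac{1}{e^{h\nu/k_B T} - 1}

instead of the classical 8\pi\nu^2 k_B T / c^3; the exponential factor shuts off the high-frequency modes, and the spectrum peaks at a finite frequency which shifts towards the blue as the temperature rises, which is what is actually observed.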

“What Dirac did was to develop a quantum mechanical version of Maxwell’s theory of electromagnetic fields. […] It set the quantum field up as the fundamental entity on which the universe is built — neither particle nor wave, but both at once; complete wave–particle duality. It is a beautiful reconciliation of all the phenomena that light exhibits, and provides a framework in which to understand all optical effects, both those from the classical world of Newton, Maxwell, and Hamilton and those of the quantum world of Planck, Einstein, and Bohr. […] Light acts as a particle of more or less well-defined energy when it interacts with matter. Yet it retains its ability to exhibit wave-like phenomena at the same time. The resolution [was] a new concept: the quantum field. Light particles — photons — are excitations of this field, which propagates according to quantum versions of Maxwell’s equations for light waves. Quantum fields, of which light is perhaps the simplest example, are now regarded as being the fundamental entities of the universe, underpinning all types of material and non-material things. The only explanation is that the stuff of the world is neither particle nor wave but both. This is the nature of reality.”

Some links:

Light.
Optics.
Watt.
Irradiance.
Coherence (physics).
Electromagnetic spectrum.
Joseph von Fraunhofer.
Spectroscopy.
Wave.
Transverse wave.
Wavelength.
Spatial frequency.
Polarization (waves).
Specular reflection.
Negative-index metamaterial.
Birefringence.
Interference (wave propagation).
Diffraction.
Young’s interference experiment.
Holography.
Photoactivated localization microscopy.
Stimulated emission depletion (STED) microscopy.
Fourier’s theorem (I found it hard to find a good source on this one. According to the book, “Fourier’s theorem says in simple terms that the smaller you focus light, the broader the range of wave directions you need to achieve this spot”)
X-ray diffraction.
Brewster’s angle.
Liquid crystal.
Liquid crystal display.
Wave–particle duality.
Fermat’s principle.
Wavefront.
Maupertuis’ principle.
Johann Jakob Balmer.
Max Planck.
Photoelectric effect.
Niels Bohr.
Matter wave.
Quantum vacuum.
Lamb shift.
Light-emitting diode.
Fluorescent tube.
Synchrotron radiation.
Quantum state.
Quantum fluctuation.
Spontaneous emission/stimulated emission.
Photodetector.
Laser.
Optical cavity.
X-ray absorption spectroscopy.
Diamond Light Source.
Mode-locking.
Stroboscope.
Femtochemistry.
Spacetime.
Atomic clock.
Time dilation.
High harmonic generation.
Frequency comb.
Optical tweezers.
Bose–Einstein condensate.
Pump probe spectroscopy.
Vulcan laser.
Plasma (physics).
Nonclassical light.
Photon polarization.
Quantum entanglement.
Bell test experiments.
Quantum key distribution/Quantum cryptography/Quantum computing.

August 31, 2017 Posted by | Books, Chemistry, Computer science, Physics | Leave a comment

Magnetism

This book was ‘okay…ish’, but I must admit I was a bit disappointed; the coverage was much too superficial, and I’m reasonably sure the lack of formalism made the coverage harder for me to follow than it could have been. I gave the book two stars on goodreads.

Some quotes and links below.

Quotes:

“In the 19th century, the principles were established on which the modern electromagnetic world could be built. The electrical turbine is the industrialized embodiment of Faraday’s idea of producing electricity by rotating magnets. The turbine can be driven by the wind or by falling water in hydroelectric power stations; it can be powered by steam which is itself produced by boiling water using the heat produced from nuclear fission or burning coal or gas. Whatever the method, rotating magnets inducing currents feed the appetite of the world’s cities for electricity, lighting our streets, powering our televisions and computers, and providing us with an abundant source of energy. […] rotating magnets are the engine of the modern world. […] Modern society is built on the widespread availability of cheap electrical power, and almost all of it comes from magnets whirling around in turbines, producing electric current by the laws discovered by Oersted, Ampère, and Faraday.”
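
The physical law behind ‘rotating magnets inducing currents’ is Faraday’s law of induction; for a coil of N turns and area A spinning at angular frequency ω in a field B, the induced voltage is

\mathcal{E} = -\frac{d\Phi_B}{dt} = N B A \omega \sin(\omega t),

an alternating voltage whose amplitude grows with the field strength, the coil size, and how fast you spin it, which is essentially all a turbine-driven generator is doing.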

“Maxwell was the first person to really understand that a beam of light consists of electric and magnetic oscillations propagating together. The electric oscillation is in one plane, at right angles to the magnetic oscillation. Both of them are in directions at right angles to the direction of propagation. […] The oscillations of electricity and magnetism in a beam of light are governed by Maxwell’s four beautiful equations […] Above all, Einstein’s work on relativity was motivated by a desire to preserve the integrity of Maxwell’s equations at all costs. The problem was this: Maxwell had derived a beautiful expression for the speed of light, but the speed of light with respect to whom? […] Einstein deduced that the way to fix this would be to say that all observers will measure the speed of any beam of light to be the same. […] Einstein showed that magnetism is a purely relativistic effect, something that wouldn’t even be there without relativity. Magnetism is an example of relativity in everyday life. […] Magnetic fields are what electric fields look like when you are moving with respect to the charges that ‘cause’ them. […] every time a magnetic field appears in nature, it is because a charge is moving with respect to the observer. Charge flows down a wire to make an electric current and this produces magnetic field. Electrons orbit an atom and this ‘orbital’ motion produces a magnetic field. […] the magnetism of the Earth is due to electrical currents deep inside the planet. Motion is the key in each and every case, and magnetic fields are the evidence that charge is on the move. […] Einstein’s theory of relativity casts magnetism in a new light. Magnetic fields are a relativistic correction which you observe when charges move relative to you.”
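
The ‘beautiful expression’ referred to is the one relating the speed of the waves to two constants measurable in purely electrical and magnetic bench-top experiments,

c = \frac{1}{\sqrt{\varepsilon_0 \mu_0}} \approx 3.0 \times 10^8 \ \mathrm{m/s},

and the puzzle Einstein resolved is that nothing in this formula says anything about the motion of the observer; it only makes sense if every observer measures the same speed.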

“[T]he Bohr–van Leeuwen theorem […] states that if you assume nothing more than classical physics, and then go on to model a material as a system of electrical charges, then you can show that the system can have no net magnetization; in other words, it will not be magnetic. Simply put, there are no lodestones in a purely classical Universe. This should have been a revolutionary and astonishing result, but it wasn’t, principally because it came about 20 years too late to knock everyone’s socks off. By 1921, the initial premise of the Bohr–van Leeuwen theorem, the correctness of classical physics, was known to be wrong […] But when you think about it now, the Bohr–van Leeuwen theorem gives an extraordinary demonstration of the failure of classical physics. Just by sticking a magnet to the door of your refrigerator, you have demonstrated that the Universe is not governed by classical physics.”

“[M]ost real substances are weakly diamagnetic, meaning that when placed in a magnetic field they become weakly magnetic in the opposite direction to the field. Water does this, and since animals are mostly water, it applies to them. This is the basis of Andre Geim’s levitating frog experiment: a live frog is placed in a strong magnetic field and because of its diamagnetism it becomes weakly magnetic. In the experiment, a non-uniformity of the magnetic field induces a force on the frog’s induced magnetism and, hey presto, the frog levitates in mid-air.”
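
To get a feel for the numbers involved (a back-of-the-envelope estimate of my own, not the book’s): a diamagnetic material of susceptibility χ and density ρ levitates when the magnetic force per unit volume balances gravity,

\frac{|\chi|}{\mu_0}\, B \frac{dB}{dz} = \rho g \quad\Rightarrow\quad B\frac{dB}{dz} = \frac{\mu_0 \rho g}{|\chi|} \approx 1.4 \times 10^3 \ \mathrm{T^2/m}

for water (χ ≈ −9 × 10⁻⁶, ρ ≈ 1000 kg/m³), which is why the experiment needs a magnet of order 10 T with a steep field gradient rather than anything you could do with a fridge magnet.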

“In a conventional hard disk technology, the disk needs to be spun very fast, around 7,000 revolutions per minute. […] The read head floats on a cushion of air about 15 nanometres […] above the surface of the rotating disk, reading bits off the disk at tens of megabytes per second. This is an extraordinary engineering achievement when you think about it. If you were to scale up a hard disk so that the disk is a few kilometres in diameter rather than a few centimetres, then the read head would be around the size of the White House and would be floating over the surface of the disk on a cushion of air one millimetre thick (the diameter of the head of a pin) while the disk rotated below it at a speed of several million miles per hour (fast enough to go round the equator a couple of dozen times in a second). On this scale, the bits would be spaced a few centimetres apart around each track. Hard disk drives are remarkable. […] Although hard disks store an astonishing amount of information and are cheap to manufacture, they are not fast information retrieval systems. To access a particular piece of information involves moving the head and rotating the disk to a particular spot, taking perhaps a few milliseconds. This sounds quite rapid, but with processors buzzing away and performing operations every nanosecond or so, a few milliseconds is glacial in comparison. For this reason, modern computers often use solid state memory to store temporary information, reserving the hard disk for longer-term bulk storage. However, there is a trade-off between cost and performance.”
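
Out of curiosity I ran the scale-up numbers; a quick back-of-the-envelope sketch (the platter diameter and head size below are assumed typical values of my own, not figures from the book) comes out broadly in line with the author’s analogy:

# Back-of-the-envelope check of the hard-disk scale-up analogy.
# The platter diameter and head size are assumed typical values, not from the book.
import math

cushion_real = 15e-9      # fly height quoted in the book: ~15 nm
cushion_scaled = 1e-3     # fly height in the scaled-up analogy: ~1 mm
scale = cushion_scaled / cushion_real        # implied magnification, ~67,000x

disk_diameter = 0.09      # assume a ~9 cm platter (typical 3.5-inch drive)
head_size = 1e-3          # assume a ~1 mm read-head slider
rpm = 7000                # rotation speed quoted in the book

rim_speed = math.pi * disk_diameter * rpm / 60        # m/s at the platter edge

print(f"scale factor:      {scale:,.0f}x")
print(f"scaled disk:       {scale * disk_diameter / 1000:.1f} km across")
print(f"scaled read head:  {scale * head_size:.0f} m (roughly building-sized)")
print(f"scaled rim speed:  {rim_speed * scale * 2.237 / 1e6:.1f} million mph")

On these assumptions the disk comes out at roughly 6 km across, the head at about 67 m, and the scaled rim speed at around 5 million mph, so the ‘several million miles per hour’ and ‘size of the White House’ figures hold up; the ‘couple of dozen times round the equator in a second’ bit looks rather more generous, though that of course depends on the assumptions.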

“In general, there is a strong economic drive to store more and more information in a smaller and smaller space, and hence a need to find a way to make smaller and smaller bits. […] [However] greater miniaturization comes at a price. The point is the following: when you try to store a bit of information in a magnetic medium, an important constraint on the usefulness of the technology is how long the information will last for. Almost always the information is being stored at room temperature and so needs to be robust to the ever present random jiggling effects produced by temperature […] It turns out that the crucial parameter controlling this robustness is the ratio of the energy needed to reverse the bit of information (in other words, the energy required to change the magnetization from one direction to the reverse direction) to a characteristic energy associated with room temperature (an energy which is, expressed in electrical units, approximately one-fortieth of a Volt). So if the energy to flip a magnetic bit is very large, the information can persist for thousands of years […] while if it is very small, the information might only last for a small fraction of a second […] This energy is proportional to the volume of the magnetic bit, and so one immediately sees a problem with making bits smaller and smaller: though you can store bits of information at higher density, there is a very real possibility that the information might be very rapidly scrambled by thermal fluctuations. This motivates the search for materials in which it is very hard to flip the magnetization from one state to the other.”
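
The relationship the author is describing is usually written as the Néel–Arrhenius law (not stated explicitly in the book): the typical time before thermal agitation flips a bit is

\tau \approx \tau_0 \, e^{\Delta E / k_B T}, \qquad \Delta E = K V,

where τ₀ is of the order of a nanosecond, K is the magnetic anisotropy energy density of the material, V the volume of the bit, and k_B T ≈ 1/40 eV at room temperature. Because V sits in the exponent, a bit with ΔE around 60 k_B T keeps its state for geological timescales, while one with ΔE of only a few k_B T loses it in a fraction of a second; that exponential sensitivity is exactly why shrinking the bits forces the move to higher-anisotropy materials.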

“The change in the Earth’s magnetic field over time is a fairly noticeable phenomenon. Every decade or so, compass needles in Africa are shifting by a degree, and the magnetic field overall on planet Earth is about 10% weaker than it was in the 19th century.”

Below I have added some links to topics and people covered/mentioned in the book. Many of the links below have likely also been included in some of the other posts about books from the A Very Short Introduction OUP physics series which I’ve posted this year – the main point of adding these links is to give some idea of what kind of stuff’s covered in the book:

Magnetism.
Magnetite.
Lodestone.
William Gilbert/De Magnete.
Alessandro Volta.
Ampère’s circuital law.
Charles-Augustin de Coulomb.
Hans Christian Ørsted.
Leyden jar/voltaic cell/battery (electricity).
Solenoid.
Electromagnet.
Homopolar motor.
Michael Faraday.
Electromagnetic induction.
Dynamo.
Zeeman effect.
Alternating current/Direct current.
Nikola Tesla.
Thomas Edison.
Force field (physics).
Ole Rømer.
Centimetre–gram–second system of units.
James Clerk Maxwell.
Maxwell’s equations.
Permittivity.
Permeability (electromagnetism).
Gauss’ law.
Michelson–Morley experiment.
Special relativity.
Drift velocity.
Curie’s law.
Curie temperature.
Andre Geim.
Diamagnetism.
Paramagnetism.
Exchange interaction.
Magnetic domain.
Domain wall (magnetism).
Stern–Gerlach experiment.
Dirac equation.
Giant magnetoresistance.
Spin valve.
Racetrack memory.
Perpendicular recording.
Bubble memory (“an example of a brilliant idea which never quite made it”, as the author puts it).
Single-molecule magnet.
Spintronics.
Earth’s magnetic field.
Aurora.
Van Allen radiation belt.
South Atlantic Anomaly.
Geomagnetic storm.
Geomagnetic reversal.
Magnetar.
ITER (‘International Thermonuclear Experimental Reactor’).
Antiferromagnetism.
Spin glass.
Quantum spin liquid.
Multiferroics.
Spin ice.
Magnetic monopole.
Ice rules.

August 28, 2017 Posted by | Books, Computer science, Geology, Physics | Leave a comment

Computer Science

I have enjoyed the physics books I’ve recently read in the ‘…A very short introduction’-series by Oxford University Press, so I figured it might make sense to investigate whether the series also has some decent coverage of other areas of research. I must however admit that I didn’t think too much of Dasgupta’s book. I think the author was given a very tough task. Having an author write a decent short book on a reasonably well-defined sub-topic of physics makes sense, whereas having him write the same sort of short and decent book about the entire field of ‘physics’ is a different matter. In some sense something analogous to this was what Dasgupta had been asked to do(/had undertaken to do?). Of course computer science is a relatively new field so arguably the analogy doesn’t completely hold; even if you cover every major topic in computer science there might still be significantly less ground to cover here than there would be, had he been required to cover everything from Newton (…Copernicus? Eudoxus of Cnidus? Gan De?) to modern developments in M-theory, but the main point stands; the field is much too large for a book like this to do more than perhaps very carefully scratch the surfaces of a few relevant subfields, making the author’s entire endeavour exceedingly difficult to pull off successfully. I noted while reading the book that document searches for ‘graph theory’ and ‘discrete mathematics’ yielded zero results, and I assume that many major topics/areas of relevance are simply not mentioned at all, which to be fair is but to be expected considering the format of the book. The book could have been a lot worse, but it wasn’t all that great – I ended up giving it two stars on goodreads.

My coverage of the book here on the blog will be relatively lazy: I’ll only include links in this post, not quotes from the book – I looked up a lot of links to coverage of relevant concepts and topics also covered in the book while reading it, and I have added many of these links below. The links should give you some idea of which sort of topics are covered in the publication.

Church–Turing thesis.
Turing machine.
Automata theory.
Algorithm.
Donald Knuth.
Procedural knowledge.
Machine code.
Infix notation.
Polish notation.
Time complexity.
Linear search.
Big O notation.
Computational complexity theory.
P versus NP problem.
NP-completeness.
Programming language.
Assembly language.
Hardware description language.
Data type (computer science).
Statement (computer science).
Instruction cycle.
Assignment (computer science).
Computer architecture.
Control unit.
Computer memory.
Memory buffer register.
Cache (computing).
Parallel computing (featured article).
Instruction pipelining.
Amdahl’s law.
FCFS algorithm.
Exact algorithm.
Artificial intelligence.
Means-ends analysis.

June 8, 2017 Posted by | Books, Computer science | Leave a comment

The Mathematical Challenge of Large Networks

This is another one of the aforementioned lectures I watched a while ago, but had never got around to blogging:

If I had to watch this one again, I’d probably skip most of the second half; it contains highly technical coverage of topics in graph theory, and it was very difficult for me to follow (but I did watch it to the end, just out of curiosity).

The lecturer has put up a ~500 page publication on these and related topics, which is available here, so if you want to know more that’s an obvious place to go have a look. A few other relevant links to stuff mentioned/covered in the lecture:
Szemerédi regularity lemma.
Graphon.
Turán’s theorem.
Quantum graph.

May 19, 2017 Posted by | Computer science, Lectures, Mathematics, Statistics | Leave a comment

Quantifying tradeoffs between fairness and accuracy in online learning

From a brief skim of this paper, which is coauthored by the guy giving this lecture, it looked to me like it covers many of the topics discussed in the lecture. So if you’re unsure as to whether or not to watch the lecture (…or if you want to know more about this stuff after you’ve watched the lecture) you might want to have a look at that paper. Although the video is long for a single lecture I would note that the lecture itself lasts only approximately one hour; the last 10 minutes are devoted to Q&A.

May 12, 2017 Posted by | Computer science, Economics, Lectures, Mathematics | Leave a comment

Information complexity and applications

I have previously here on the blog posted multiple lectures in my ‘lecture-posts’, or I have combined a lecture with other stuff (e.g. links such as those in the previous ‘random stuff’ post). I think such approaches have made me less likely to post lectures on the blog (if I don’t post a lecture soon after I’ve watched it, my experience tells me that I not infrequently simply never get around to posting it), and combined with this issue is also the issue that I don’t really watch a lot of lectures these days. For these reasons I have decided to start posting single lecture posts here on the blog; when I start thinking about the time expenditure of people reading along here, this approach in a way actually also seems justified – although it might take me as much time/work to watch and cover, say, 4 lectures as it would take me to read and cover 100 pages of a textbook, the time expenditure required by a reader of the blog would be very different in those two cases (you’ll usually be able to read a post that took me multiple hours to write in a short amount of time, whereas ‘the time advantage’ of the reader is close to negligible (maybe not completely; search costs are not completely irrelevant) in the case of lectures). By posting multiple lectures in the same post I probably decrease the expected value of the time readers spend watching the content I upload, which seems suboptimal.

Here’s the youtube description of the lecture, which was posted a few days ago on the IAS youtube account:

“Over the past two decades, information theory has reemerged within computational complexity theory as a mathematical tool for obtaining unconditional lower bounds in a number of models, including streaming algorithms, data structures, and communication complexity. Many of these applications can be systematized and extended via the study of information complexity – which treats information revealed or transmitted as the resource to be conserved. In this overview talk we will discuss the two-party information complexity and its properties – and the interactive analogues of classical source coding theorems. We will then discuss applications to exact communication complexity bounds, hardness amplification, and quantum communication complexity.”

He actually decided to skip the quantum communication complexity stuff because of the time constraint. I should note that the lecture was ‘easy enough’ for me to follow most of it, so it is not really that difficult, at least not if you know some basic information theory.
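
For those unfamiliar with the field, the central quantity in the talk (the ‘resource to be conserved’) is, in its standard form, the internal information cost of a protocol π whose transcript is Π, run on inputs (X, Y) drawn from a distribution μ:

\mathrm{IC}_\mu(\pi) = I(\Pi ; X \mid Y) + I(\Pi ; Y \mid X),

i.e. how much each party learns about the other party’s input from the conversation. Since a protocol cannot reveal information faster than it transmits bits, lower bounds on information cost translate into lower bounds on communication complexity, which is how the applications mentioned in the description come about.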

A few links to related stuff (you can take these links as indications of what sort of stuff the lecture is about/discusses, if you’re on the fence about whether or not to watch it):
Computational complexity theory.
Shannon entropy.
Shannon’s source coding theorem.
Communication complexity.
Communications protocol.
Information-based complexity.
Hash function.
From Information to Exact Communication (in the lecture he discusses some aspects covered in this paper).
Unique games conjecture (Two-prover proof systems).
A Counterexample to Strong Parallel Repetition (another paper mentioned/briefly discussed during the lecture).
Pinsker’s inequality.

An interesting aspect I once again noted during this lecture is the sort of loose linkage you sometimes observe between the topics of game theory/microeconomics and computer science. Of course the link is made explicit a few minutes later in the talk when he discusses the unique games conjecture to which I link above, but it’s perhaps worth noting that the link is on display even before that point is reached. Around 38 minutes into the lecture he mentions that one of the relevant proofs ‘involves such things as Lagrange multipliers and optimization’. I was far from surprised, as from a certain point of view the problem he discusses at that point is conceptually very similar to some problems encountered in auction theory, where Lagrange multipliers and optimization problems are frequently encountered… If you are too unfamiliar with that field to realize how the similar problem might appear in an auction theory context, what you have there are instead auction participants who prefer not to reveal their true willingness to pay; and some auction designs actually work in a very similar manner as does the (pseudo-)protocol described in the lecture, and are thus used to reveal it (for some subset of participants at least).

March 12, 2017 Posted by | Computer science, Game theory, Lectures, Papers | Leave a comment

Random stuff

i. A very long but entertaining chess stream by Peter Svidler was recently uploaded on the Chess24 youtube account – go watch it here, if you like that kind of stuff. The fact that it’s five hours long is a reason to rejoice, not a reason to think that it’s ‘too long to be watchable’ – watch it in segments…

People interested in chess might also be interested to know that Magnus Carlsen has made an account on the ICC on which he has played, which was a result of his recent participation in the ICC Open 2016 (link). A requirement for participation in the tournament was that people had to know whom they were playing against (so there would be no ultra-strong GMs playing using anonymous accounts in the finals – they could use accounts with strange names, but people had to know whom they were playing), so now we know that Magnus Carlsen has played under the nick ‘stoptryharding’ on the ICC. Carlsen did not win the tournament as he lost to Grischuk in the semi-finals. Some very strong players were incidentally kicked out in the qualifiers, including Nepomniachtchi, the current #5 in the world on the FIDE live blitz ratings.

ii. A lecture:

iii. Below I have added some new words I’ve encountered, most of them in books I’ve read (I have not spent much time on vocabulary.com recently). I’m sure if I were to look all of them up on vocabulary.com some (many?) of them would not be ‘new’ to me, but that’s not going to stop me from including them here (I included the word ‘inculcate’ below for a reason…). Do take note of the spelling of some of these words – some of them are tricky ones included in Bryson’s Dictionary of Troublesome Words: A Writer’s Guide to Getting It Right, which people often get wrong for one reason or another:

Conurbation, epizootic, equable, circumvallation, contravallation, exiguous, forbear, louche, vituperative, thitherto, congeries, inculcate, obtrude, palter, idiolect, hortatory, enthalpy (see also wiki, or Khan Academy), trove, composograph, indite, mugginess, apodosis, protasis, invidious, inveigle, inflorescence, kith, anatopism, laudation, luxuriant, maleficence, misogamy (I did not know this was a word, and I’ll definitely try to remember it/that it is…), obsolescent, delible, overweening, parlay (this word probably does not mean what you think it means…), perspicacity, perspicuity, temblor, precipitous, quinquennial, razzmatazz, turpitude, vicissitude, vitriform.

iv. Some quotes from this excellent book review, by Razib Khan:

“relatively old-fashioned anti-religious sentiments […] are socially acceptable among American Left-liberals so long as their targets are white Christians (“punching up”) but more “problematic” and perhaps even “Islamophobic” when the invective is hurled at Muslim “people of color” (all Muslims here being tacitly racialized as nonwhite). […] Muslims, as marginalized people, are now considered part of a broader coalition on the progressive Left. […] most Left-liberals who might fall back on the term Islamophobia, don’t actually take Islam, or religion generally, seriously. This explains the rapid and strident recourse toward a racial analogy for Islamic identity, as that is a framework that modern Left-liberals and progressives have internalized and mastered. The problem with this is that Islam is not a racial or ethnic identity, it is a set of beliefs and practices. Being a Muslim is not about being who you are in a passive sense, but it is a proactive expression of a set of ideas about the world and your behavior within the world. This category error renders much of Left-liberal and progressive analysis of Islam superficial, and likely wrong.”

“To get a genuine understanding of a topic as broad and boundless as Islam one needs to both set aside emotional considerations, as Ben Affleck can not, and dig deeply into the richer and more complex empirical texture, which Sam Harris has not.”

“One of the most obnoxious memes in my opinion during the Obama era has been the popularization of the maxim that “The arc of the moral universe is long, but it bends towards justice.” It is smug and self-assured in its presentation. […] too often it becomes an excuse for lazy thinking and shallow prognostication. […] Modern Western liberals have a particular idea of what a religion is, and so naturally know that Islam is in many ways just like United Methodism, except with a hijab and iconoclasm. But a Western liberalism that does not take cultural and religious difference seriously is not serious, and yet all too often it is what we have on offer. […] On both the American Left and Right there is a tendency to not even attempt to understand Islam. Rather, stylized models are preferred which lead to conclusions which are already arrived at.”

“It’s fine to be embarrassed by reality. But you still need to face up to reality. Where Hamid, Harris, and I all start is the fact that the vast majority of the world’s Muslims do not hold views on social issues that are aligned with the Muslim friends of Hollywood actors. […] Before the Green Revolution I told people to expect there to be a Islamic revival, as 86 percent of Egyptians polled agree with the killing of apostates. This is not a comfortable fact for me, as I am technically an apostate.* But it is a fact. Progressives who exhibit a hopefulness about human nature, and confuse majoritarian democracy with liberalism and individual rights, often don’t want to confront these facts. […] Their polar opposites are convinced anti-Muslims who don’t need any survey data, because they know that Muslims have particular views a priori by virtue of them being Muslims. […] There is a glass half-full/half-empty aspect to the Turkish data. 95 percent of Turks do not believe apostates should be killed. This is not surprising, I know many Turkish atheists personally. But, 5 percent is not a reassuring fraction as someone who is personally an apostate. The ideal, and frankly only acceptable, proportion is basically 0 percent.”

“Harris would give a simple explanation for why Islam sanctions the death penalty for apostates. To be reductive and hyperbolic, his perspective seems to be that Islam is a totalitarian cult, and its views are quite explicit in the Quran and the Hadith. Harris is correct here, and the views of the majority of Muslims in Egypt (and many other Muslim nations) has support in Islamic law. The consensus historical tradition is that apostates are subject to the death penalty. […] the very idea of accepting atheists is taboo in most Arab countries”.

“Christianity which Christians hold to be fundamental and constitutive of their religion would have seemed exotic and alien even to St. Paul. Similarly, there is a much smaller body of work which makes the same case for Islam.

A précis of this line of thinking is that non-Muslim sources do not make it clear that there was in fact a coherent new religion which burst forth out of south-central Arabia in the 7th century. Rather, many aspects of Islam’s 7th century were myths which developed over time, initially during the Umayyad period, but which eventually crystallized and matured into orthodoxy under the Abbasids, over a century after the death of Muhammad. This model holds that the Arab conquests were actually Arab conquests, not Muslim ones, and that a predominantly nominally Syrian Christian group of Arab tribes eventually developed a new religion to justify their status within the empire which they built, and to maintain their roles within it. The mawali (convert) revolution under the Abbasids in the latter half of the 8th century transformed a fundamentally Arab ethnic sect, into a universal religion. […] The debate about the historical Jesus only emerged when the public space was secularized enough so that such discussions would not elicit violent hostility from the populace or sanction from the authorities. [T]he fact is that the debate about the historical Muhammad is positively dangerous and thankless. That is not necessarily because there is that much more known about Muhammad than Jesus, it is because post-Christian society allows for an interrogation of Christian beliefs which Islamic society does not allow for in relation to Islam’s founding narratives.”

“When it comes to understanding religion you need to start with psychology. In particular, cognitive psychology. This feeds into the field of evolutionary anthropology in relation to the study of religion. Probably the best introduction to this field is Scott Atran’s dense In Gods We Trust: The Evolutionary Landscape of Religion. Another representative work is Theological Incorrectness: Why Religious People Believe What They Shouldn’t. This area of scholarship purports to explain why religion is ubiquitous, and, why as a phenomenon it tends to exhibit a particular distribution of characteristics.

What cognitive psychology suggests is that there is a strong disjunction between the verbal scripts that people give in terms of what they say they believe, and the internal Gestalt mental models which seem to actually be operative in terms of informing how they truly conceptualize the world. […] Muslims may aver that their god is omniscient and omnipresent, but their narrative stories in response to life circumstances seem to imply that they believe god may not see or know all things at all moments.

The deep problem here is understood [by] religious professionals: they’ve made their religion too complex for common people to understand without their intermediation. In fact, I would argue that theologians themselves don’t really understand what they’re talking about. To some extent this is a feature, not a bug. If the God of Abraham is transformed into an almost incomprehensible being, then religious professionals will have perpetual work as interpreters. […] even today most Muslims can not read the Quran. Most Muslims do not speak Arabic. […] The point isn’t to understand, the point is that they are the Word of God, in the abstract. […] The power of the Quran is that the Word of God is presumably potent. Comprehension is secondary to the command.”

“the majority of the book […] is focused on political and social facts in the Islamic world today. […] That is the best thing about Islamic Exceptionalism, it will put more facts in front of people who are fact-starved, and theory rich. That’s good.”

“the term ‘fundamentalist’ in the context of islam isn’t very informative.” (from the comments).

Below I have added some (very) superficially related links of my own, most of them ‘data-related’ (in general I’d say that I usually find ‘raw data’ more interesting than ‘big ideas’):

*My short review of Theological Correctness, one of the books Razib mentions.

*Of almost 163,000 people who applied for asylum in Sweden last year, less than 500 landed a job (news article).

*An analysis of Danish data conducted by the Rockwool Foundation found that for family-reunificated spouses/relatives etc. to fugitives, 22 % were employed after having lived in Denmark for five years (the family-reunificated individuals, that is, not the fugitives themselves). Only one in three of the family-reunificated individuals had managed to find a job after having stayed here for fifteen years. The employment rate of family-reunificated to immigrants is 49 % for people who have been in the country for 5 years, and the number is below 60 % after 15 years. In Denmark, the employment rate of immigrants from non-Western countries was 47.7 % in November 2013, compared to 73.8 % for people of (…’supposedly’, see also my comments and observations here) Danish origin, according to numbers from Statistics Denmark (link). When you look at the economic performance of the people with fugitive status themselves, 34 % are employed after 5 years, but that number is almost unchanged a decade later – only 37 % are employed after they’ve stayed in Denmark for 15 years.
Things of course sometimes look even worse at the local level than these numbers reflect, because those averages are, well, averages; for example of the 244 fugitives and family-reunificated who had arrived in the Danish Elsinore Municipality within the last three years, exactly 5 of them were in full-time employment.

*Rotherham child sexual exploitation scandal (“The report estimated that 1,400 children had been sexually abused in the town between 1997 and 2013, predominantly by gangs of British-Pakistani Muslim men […] Because most of the perpetrators were of Pakistani heritage, several council staff described themselves as being nervous about identifying the ethnic origins of perpetrators for fear of being thought racist […] It was reported in June 2015 that about 300 suspects had been identified.”)

*A memorial service for the terrorist and murderer Omar El-Hussein who went on a shooting rampage in Copenhagen last year (link) gathered 1500 people, and 600-700 people also participated at the funeral (Danish link).

*Pew asked muslims in various large countries whether they thought ‘Suicide Bombing of Civilian Targets to Defend Islam [can] be Justified?’ More than a third of French muslims think that it can, either ‘often/sometimes’ (16 %) or ‘rarely’ (19 %). Roughly a fourth of British muslims think so as well (15 % often/sometimes, 9 % rarely). Of course in countries like Jordan, Nigeria, and Egypt the proportion of people who do not reply ‘never’ is above 50 %. In such contexts people often like to focus on what the majorities think, but I found it interesting to note that in only 2 of the 11 countries queried (Germany – 7 %, & the US – 8 %) was it less than 10 % of muslims who thought suicide bombings were either ‘often’ or ‘sometimes’ justified. Those numbers are some years old. Newer numbers (from non-Western countries only, unfortunately) tell us that e.g. fewer than two out of five Egyptians (38 %) and fewer than three out of five (58 %) Turks would answer ‘never’ when asked this question just a couple of years ago, in 2014.

*A few non-data related observations here towards the end. I do think Razib is right that cognitive psychology is a good starting point if you want to ‘understand religion’, but a more general point I would make is that there are many different analytical approaches to these sorts of topics which one might employ, and I think it’s important that one does not privilege any single analytical framework over the others (just to be clear, I’m not saying that Razib’s doing this); different approaches may yield different insights, perhaps at different analytical levels, and combining different approaches is likely to be very useful in order to get ‘the bigger picture’, or at least to not overlook important details. ‘History’, broadly defined, may provide one part of the explanatory model, cognitive psychology another part, mathematical anthropology (e.g. stuff like this) probably also has a role to play, etc., etc.. Survey data, economic figures, scientific literatures on a wide variety of topics like trust, norms, migration analysis, and conflict studies, e.g. those dealing with civil wars, may all help elucidate important questions of interest, if not by adding relevant data then by providing additional methodological approaches/scaffoldings which might be fruitfully employed to make sense of the data that is available.

v. Statistical Portrait of Hispanics in the United States.

vi. The Level and Nature of Autistic Intelligence. Autistics may be smarter than people have been led to believe:

“Autistics are presumed to be characterized by cognitive impairment, and their cognitive strengths (e.g., in Block Design performance) are frequently interpreted as low-level by-products of high-level deficits, not as direct manifestations of intelligence. Recent attempts to identify the neuroanatomical and neurofunctional signature of autism have been positioned on this universal, but untested, assumption. We therefore assessed a broad sample of 38 autistic children on the preeminent test of fluid intelligence, Raven’s Progressive Matrices. Their scores were, on average, 30 percentile points, and in some cases more than 70 percentile points, higher than their scores on the Wechsler scales of intelligence. Typically developing control children showed no such discrepancy, and a similar contrast was observed when a sample of autistic adults was compared with a sample of nonautistic adults. We conclude that intelligence has been underestimated in autistics.”

I recall that back when I was diagnosed I was subjected to a battery of different cognitive tests of various kinds, and a few of those tests I recall thinking were very difficult, compared to how difficult they somehow ‘ought to be’ – it was like ‘this should be an easy task for someone who has the mental hardware to solve this type of problem, but I don’t seem to have that piece of hardware; I have no idea how to manipulate these objects in my head so that I might answer that question’. This was an at least somewhat unfamiliar feeling to me in a testing context, and I definitely did not have this experience when doing the Mensa admissions test later on, which was based on Raven’s matrices. Despite the fact that all IQ tests are supposed to measure pretty much the same thing I do not find it hard to believe that there are some details here which may complicate matters a bit in specific contexts, e.g. for people whose brains may not be structured quite the same way ‘ordinary brains’ are (to put it very bluntly). But of course this is just one study and a few personal impressions – more research is needed, etc. (Even though the effect size is huge.)

Slightly related to the above is also this link – I must admit that I find the title question quite interesting. I find it very difficult to picture characters featuring in books I’m reading in my mind, and so usually when I read books I don’t form any sort of coherent mental image of what the character looks like. It doesn’t matter to me, I don’t care. I have no idea if this is how other people read (fiction) books, or if they actually imagine what the characters look like more or less continuously while those characters are described doing the things they might be doing; to me it would be just incredibly taxing to keep even a simplified mental model of the physical attributes of a character in my mind for even a minute. I can recall specific traits like left-handedness and similar without much difficulty if I think the trait might have relevance to the plot, which has helped me while reading e.g. Agatha Christie novels before, but actively imagining what people look like in my mind I just find very difficult. I find it weird to think that some people might do something like that almost automatically, without thinking about it.

vii. Computer Science Resources. I recently shared the link with a friend, but of course she was already aware of the existence of this resource. Some people reading along here may not be, so I’ll include the link here. It has a lot of stuff.

June 8, 2016 Posted by | autism, Books, Chess, Computer science, Data, Demographics, Psychology, Random stuff, Religion | Leave a comment

Random stuff

I find it difficult to find the motivation to finish the half-finished drafts I have lying around, so this will have to do. Some random stuff below.

i.

(15,000 views… In some sense that seems really ‘unfair’ to me, but on the other hand I doubt either Beethoven or Gilels cares; they’re both long dead, after all…)

ii. New/newish words I’ve encountered in books, on vocabulary.com or elsewhere:

Agley, peripeteia, dissever, halidom, replevin, socage, organdie, pouffe, dyarchy, tauricide, temerarious, acharnement, cadger, gravamen, aspersion, marronage, adumbrate, succotash, deuteragonist, declivity, marquetry, machicolation, recusal.

iii. A lecture:

It’s been a long time since I watched it so I don’t have anything intelligent to say about it now, but I figured it might be of interest to one or two of the people who still subscribe to the blog despite the infrequent updates.

iv. A few wikipedia articles (I won’t comment much on the contents or quote extensively from the articles the way I’ve done in previous wikipedia posts – the links shall have to suffice for now):

Duverger’s law.

Far side of the moon.

Preference falsification.

Russian political jokes. Some of those made me laugh (e.g. this one: “A judge walks out of his chambers laughing his head off. A colleague approaches him and asks why he is laughing. ‘I just heard the funniest joke in the world!’ ‘Well, go ahead, tell me!’ says the other judge. ‘I can’t – I just gave someone ten years for it!’”).

Political mutilation in Byzantine culture.

v. World War 2, if you think of it as a movie, has a highly unrealistic and implausible plot, according to this amusing post by Scott Alexander. Having recently read a rather long book about these topics, one aspect I’d have added had I written the piece myself would be that an additional factor making the setting seem even more implausible is how so many presumably quite smart people were so – what at least in retrospect seems – unbelievably stupid when it came to Hitler’s ideas and intentions before the war. Going back to Churchill’s own life I’d also add that if you were to make a movie about Churchill’s life during the war, which you could probably relatively easily do if you were to just base it upon his own copious and widely shared notes, then it could probably be made into a quite decent movie. His own comments, remarks, and observations certainly made for a great book.

May 15, 2016 Posted by | Astronomy, Computer science, History, Language, Lectures, Mathematics, Music, Random stuff, Russia, Wikipedia | Leave a comment

A few lectures

The sound quality of this lecture is not completely optimal – there’s a recurring echo popping up now and then which I found slightly annoying – but this should not keep you from watching the lecture. It’s a quite good lecture, and very accessible – I don’t really think you even need to know anything about genetics to follow most of what he’s talking about here; as far as I can tell it’s a lecture intended for people who don’t really know much about population genetics. He introduces key concepts as they are needed and he does not go much into the technical details which might cause people trouble (this of course also makes the lecture somewhat superficial, but you can’t get everything). If you’re the sort of person who wants details not included in the lecture you’re probably already reading e.g. Razib Khan (who incidentally recently blogged/criticized a not too dissimilar paper from the one discussed in the lecture, dealing with South Asia)…

I must admit that I actually didn’t like this lecture very much, but I figured I might as well include it in this post anyway.

I found some questions included and some aspects of the coverage a bit ‘too basic’ for my taste, but other people interested in chess reading along here may like Anna’s approach better; like Krause’s lecture I think it’s an accessible lecture, despite the fact that it actually covers many lines in quite a bit of detail. It’s a long lecture but I don’t think you necessarily need to watch all of it in one go (…or at all?) – the analysis of the second game, the Kortschnoj-Gheorghiu game, starts around 45 minutes in so that might for example be a good place to include a break, if a break is required.

February 1, 2016 Posted by | Anthropology, Archaeology, Chess, Computer science, Evolutionary biology, Genetics, History, Lectures | Leave a comment

A few lectures

Below are three new lectures from the Institute of Advanced Study. As far as I’ve gathered they’re all from an IAS symposium called ‘Lens of Computation on the Sciences’ – all three lecturers are computer scientists, but you don’t have to be a computer scientist to watch these lectures.

Should computer scientists and economists band together more and try to use the insights from one field to help solve problems in the other field? Roughgarden thinks so, and provides examples of how this might be done/has been done. Applications discussed in the lecture include traffic management and auction design. I’m not sure how much of this lecture is easy to follow for people who don’t know anything about either topic (i.e., computer science and economics), but I found it not too difficult to follow – it probably helped that I’ve actually done work on a few of the things he touches upon in the lecture, such as basic auction theory, the fixed point theorems and related proofs, basic queueing theory and basic discrete maths/graph theory. Either way there are certainly much more technical lectures than this one available at the IAS channel.

I don’t have Facebook and I’m not planning on ever getting a FB account, so I’m not really sure I care about the things this guy is trying to do, but the lecturer does touch upon some interesting topics in network theory. Not a great lecture in my opinion and occasionally I think the lecturer ‘drifts’ a bit, talking without saying very much, but it’s also not a terrible lecture. A few times I was really annoyed that you can’t see where he’s pointing that damn laser pointer, but this issue should not stop you from watching the video, especially not if you have an interest in analytical aspects of how to approach and make sense of ‘Big Data’.

I’ve noticed that Scott Alexander has said some nice things about Scott Aaronson a few times, but until now I’ve never actually read any of the latter guy’s stuff or watched any lectures by him. I agree with Scott (Alexander) that Scott (Aaronson) is definitely a smart guy. This is an interesting lecture; I won’t pretend I understood all of it, but it has some thought-provoking ideas and important points in the context of quantum computing and it’s actually a quite entertaining lecture; I was close to laughing a couple of times.

January 8, 2016 Posted by | Computer science, Economics, Game theory, Lectures, Mathematics, Physics | Leave a comment

Random stuff/Open Thread

i. A lecture on mathematical proofs:

ii. “In the fall of 1944, only seven percent of all bombs dropped by the Eighth Air Force hit within 1,000 feet of their aim point.”

From wikipedia’s article on Strategic bombing during WW2. The article has a lot of stuff. The ‘RAF estimates of destruction of “built up areas” of major German cities’ numbers in the article made my head spin – they didn’t bomb the Germans back to the stone age, but they sure tried. Here’s another observation from the article:

“After the war, the U.S. Strategic Bombing Survey reviewed the available casualty records in Germany, and concluded that official German statistics of casualties from air attack had been too low. The survey estimated that at a minimum 305,000 were killed in German cities due to bombing and estimated a minimum of 780,000 wounded. Roughly 7,500,000 German civilians were also rendered homeless.” (The German population at the time was roughly 70 million).

iii. Also war-related: Eddie Slovik:

Edward Donald “Eddie” Slovik (February 18, 1920 – January 31, 1945) was a United States Army soldier during World War II and the only American soldier to be court-martialled and executed for desertion since the American Civil War.[1][2]

Although over 21,000 American soldiers were given varying sentences for desertion during World War II, including 49 death sentences, Slovik’s was the only death sentence that was actually carried out.[1][3][4]

During World War II, 1.7 million courts-martial were held, representing one third of all criminal cases tried in the United States during the same period. Most of the cases were minor, as were the sentences.[2] Nevertheless, a clemency board, appointed by the Secretary of War in the summer of 1945, reviewed all general courts-martial where the accused was still in confinement.[2][5] That Board remitted or reduced the sentence in 85 percent of the 27,000 serious cases reviewed.[2] The death penalty was rarely imposed, and those cases typically were for rapes or murders. […] In France during World War I from 1917 to 1918, the United States Army executed 35 of its own soldiers, but all were convicted of rape and/or unprovoked murder of civilians and not for military offenses.[13] During World War II in all theaters of the war, the United States military executed 102 of its own soldiers for rape and/or unprovoked murder of civilians, but only Slovik was executed for the military offense of desertion.[2][14] […] of the 2,864 army personnel tried for desertion for the period January 1942 through June 1948, 49 were convicted and sentenced to death, and 48 of those sentences were voided by higher authority.”

What motivated me to read the article was mostly curiosity about how many people were actually executed for deserting during the war, a question I’d never encountered any answers to previously. The US number turned out to be, well, let’s just say it’s lower than I’d expected it would be. American soldiers who chose to desert during the war seem to have had much, much better chances of surviving the war than had soldiers who did not. Slovik was not a lucky man. On a related note, given numbers like these I’m really surprised desertion rates were not much higher than they were; presumably community norms (‘desertion = disgrace’, which would probably rub off on other family members…) played a key role here.

iv. Chess and infinity. I haven’t posted this link before even though the thread is a few months old, and I figured that given that I just had a conversation on related matters in the comment section of SCC (here’s a link) I might as well repost some of this stuff here. Some key points from the thread (I had to make slight formatting changes to the quotes because wordpress had trouble displaying some of the numbers, but the content is unchanged):

u/TheBB:
“Shannon has estimated the number of possible legal positions to be about 10^43. The number of legal games is quite a bit higher, estimated by Littlewood and Hardy to be around 10^(10^5) (commonly cited as 10^(10^50) perhaps due to a misprint). This number is so large that it can’t really be compared with anything that is not combinatorial in nature. It is far larger than the number of subatomic particles in the observable universe, let alone stars in the Milky Way galaxy.

As for your bonus question, a typical chess game today lasts about 40 to 60 moves (let’s say 50). Let us say that there are 4 reasonable candidate moves in any given position. I suspect this is probably an underestimate if anything, but let’s roll with it. That gives us about 4^(2×50) ≈ 10^60 games that might reasonably be played by good human players. If there are 6 candidate moves, we get around 10^77, which is in the neighbourhood of the number of particles in the observable universe.”

u/Wondersnite:
“To put 10^(10^5) into perspective:

There are 10^80 protons in the Universe. Now imagine inside each proton, we had a whole entire Universe. Now imagine again that inside each proton inside each Universe inside each proton, you had another Universe. If you count up all the protons, you get (10^80)^3 = 10^240, which is nowhere near the number we’re looking for.

You have to have Universes inside protons all the way down to 1250 steps to get the number of legal chess games that are estimated to exist. […]

Imagine that every single subatomic particle in the entire observable universe was a supercomputer that analysed a possible game in a single Planck unit of time (10^-43 seconds, the time it takes light in a vacuum to travel 10^-20 times the width of a proton), and that every single subatomic particle computer was running from the beginning of time up until the heat death of the Universe, 10^1000 years ≈ 10^11 × 10^1000 seconds from now.

Even in these ridiculously favorable conditions, we’d only be able to calculate

10^80 × 10^43 × 10^11 × 10^1000 = 10^1134

possible games. Again, this doesn’t even come close to 10^(10^5) = 10^100000.

Basically, if we ever solve the game of chess, it definitely won’t be through brute force.”
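The arithmetic in these quotes is easy enough to reproduce; here’s a minimal sketch (in Python, not from the thread) using the same rough assumptions as the quoted comments – roughly 50 moves per game and 4 or 6 candidate moves per position:

```python
from math import log10

# Back-of-the-envelope estimates from the quoted reddit comments above.

plies = 2 * 50  # ~50 moves per game, i.e. ~100 half-moves
for candidates in (4, 6):
    games = candidates ** plies
    print(f"{candidates} candidate moves per position: ~10^{int(log10(games))} plausible games")
# -> ~10^60 and ~10^77, as in the first quote

# Wondersnite's 'ridiculously favorable' brute-force budget:
# 10^80 particle-computers x 10^43 games per second x (10^11 * 10^1000) seconds
budget_exponent = 80 + 43 + 11 + 1000
print(f"Games that could be checked: ~10^{budget_exponent}")   # 10^1134
print("Estimated number of legal games: ~10^100000")           # i.e. 10^(10^5), nowhere close
```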

v. An interesting resource which a friend of mine recently shared with me and which I thought I should share here as well: Nature Reviews – Disease Primers.

vi. Here are some words I’ve recently encountered on vocabulary.com: augury, spangle, imprimatur, apperception, contrition, ensconce, impuissance, acquisitive, emendation, tintinnabulation, abalone, dissemble, pellucid, traduce, objurgation, lummox, exegesis, probity, recondite, impugn, viscid, truculence, appurtenance, declivity, adumbrate, euphony, educe, titivate, cerulean, ardour, vulpine.

May 16, 2015 Posted by | Chess, Computer science, History, Language, Lectures, Mathematics | Leave a comment

Belief-Based Stability in Coalition Formation with Uncertainty…

“In this book we present several novel concepts in cooperative game theory, but from a computer scientist’s point of view. Especially, we will look at a type of games called non-transferable utility games. […] In this book, we extend the classic stability concept of the non-transferable utility core by proposing new belief-based stability criteria under uncertainty, and illustrate how the new concept can be used to analyse the stability of a new type of belief-based coalition formation game. Mechanisms for reaching solutions of the new stable criteria are proposed and some real life application examples are studied. […] In Chapter 1, we first provide an introduction of topics in game theory that are relevant to the concepts discussed in this book. In Chapter 2, we review some relevant works from the literature, especially in cooperative game theory and multi-agent coalition formation problems. In Chapter 3, we discuss the effect of uncertainty in the agent’s beliefs on the stability of the games. A rule-based approach is adopted and the concepts of strong core and weak core are introduced. We also discuss the effect of precision of the beliefs on the stability of the coalitions. In Chapter 4, we introduce private beliefs in non-transferable utility (NTU) games, so that the preferences of the agents are no longer common knowledge. The impact of belief accuracy on stability is also examined. In Chapter 5, we study an application of the proposed belief-based stability concept, namely the buyer coalition problem, and we see how the proposed concept can be used in the evaluation of this multi-agent coalition formation problem. In Chapter 6, we combine the works of earlier chapters and produce a complete picture of the introduced concepts: non-transferable utility games with private beliefs and uncertainty. We conclude this book in Chapter 7.”

The above quote is from the preface of the book, which I finished yesterday. It deals with some issues I was slightly annoyed about not being covered in a previous micro course; my main problem being that it seemed to me back then that the question of belief accuracy and the role of this variable was not properly addressed in the models we looked at (‘people can have mistaken beliefs, and it seems obvious that the ways in which they’re wrong can affect which solutions are eventually reached’). The book makes the point that if you look at coalition formation in a context where it is not reasonable to assume that information is shared among coalition partners (because it is in the interest of the participants to keep their information/preferences/willingness to pay private), then the beliefs of the potential coalition partners may play a major role in determining which coalitions are feasible and which are ruled out. A key point is that in the model context explored by the authors, inaccurate beliefs of agents will expand the number of potential coalitions which are available, although coalition options ruled out by accurate beliefs are less stable than ones which are not. They do not discuss the fact that this feature is unquestionably a result of implicit assumptions made along the way which may not be true, and that inaccurate beliefs may also in some contexts conceivably lead to lower solution support in general (e.g. through variables such as disagreement, or, to think more in terms of concepts specifically included in their model framework, higher general instability of solutions which can feasibly be reached, making agents less likely to explore the option of participating in coalitions in the first place due to the lower payoffs associated with the available coalitions likely to be reached – dynamics such as these are not included in the coverage). I decided early on to not blog the stuff in this book in major detail because it’s not the kind of book where this makes sense to do (in my opinion), but if you’re curious about how they proceed, they talk quite a bit about the (classical) Core and discuss why this is not an appropriate solution concept to apply in the contexts they explore, and they then proceed to come up with new and better solution criteria, developed with the aid of some new variables and definitions along the way, in order to end up with some better solution concepts, their so-called ‘belief-based cores’, which are perhaps best thought of as extensions of the classical core concept. I should perhaps point out, as this may not be completely clear, that the beliefs they talk about deal both with the ‘state of nature’ (which in part of the coverage is assumed to be basically unobservable) and the preferences of agents involved.
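To give an idea of the (classical, full-information) stability notion the authors take as their point of departure, here’s a toy sketch of the blocking/core idea with made-up agents and preferences (this covers only the classical NTU core, not the belief-based extensions developed in the book, and all names below are invented for illustration):

```python
# Toy NTU-core check: an outcome is 'blocked' if some coalition has a feasible
# alternative that ALL of its members strictly prefer; unblocked outcomes form the core.

agents = [1, 2, 3]

# Feasible outcomes for each coalition (frozenset of members -> set of outcome labels)
feasible = {
    frozenset({1, 2, 3}): {"x", "y"},
    frozenset({1, 2}):    {"z"},
    frozenset({1}): {"solo1"}, frozenset({2}): {"solo2"}, frozenset({3}): {"solo3"},
}

# Each agent's preference ranking, best first (a preference relation, not a utility function)
prefs = {
    1: ["x", "z", "y", "solo1"],
    2: ["z", "y", "x", "solo2"],
    3: ["x", "y", "solo3"],
}

def prefers(agent, a, b):
    """True if `agent` strictly prefers outcome a to outcome b."""
    ranking = prefs[agent]
    return ranking.index(a) < ranking.index(b)

def blocked(outcome):
    """Return a blocking coalition and its preferred alternative, or None if unblocked."""
    for coalition, alternatives in feasible.items():
        for alt in alternatives:
            if all(alt in prefs[i] and prefers(i, alt, outcome) for i in coalition):
                return coalition, alt
    return None

for outcome in feasible[frozenset(agents)]:
    print(outcome, "->", blocked(outcome) or "in the core")
# 'y' is blocked by coalition {1, 2} via 'z'; 'x' is unblocked, so only 'x' is in the core here.
```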

If you want a sort of bigger picture idea of what this book is about, I should point out that in general you have two major sub-fields of game theory, dealing with cooperative and non-cooperative games respectively. Within the sub-field of cooperative games, a distinction is made between games and settings where utilities are transferable, and games/settings where they are not. This book belongs in the latter category; it deals with cooperative games in which utilities are non-transferable. The authors in the beginning make a big deal out of the distinction between whether or not utilities are transferable, and claim that the assumption that they’re not is the more plausible one; while they do have a point, I actually think the non-transferability assumption is borderline questionable in some of the specific examples included in the book. To give an example, the non-transferability assumption seems in one context to imply that all potential coalition partners have the same amount of bargaining power. This assumption is plausible in some contexts, but wildly implausible in others (and I’m not sure the authors would agree with me about which contexts would belong to which category).

The professor teaching the most recent course in micro I took had a background in computer science, rather than economics – he was also Asian, but this perhaps goes without saying. This book is supposedly a computer science book, and they argue in the introduction that: “instead of looking at human beings, we study the problem from an intelligent software agent’s perspective.” However I don’t think a single one of the examples included in the book would be an example you could not also have found in a classic micro text, and it’s really hard to tell in many parts of the coverage that the authors aren’t economists with a background in micro – there seems to be quite a bit of field overlap here (this field overlap incidentally extends to areas of economics besides micro, is my impression; one econometrics TA I had, teaching the programming part of the course, was also a CS major). In the book they talk a bit about coalition formation mechanisms and approaches, such as propose-and-evaluate mechanisms and auction approaches, and they also touch briefly upon stuff like mechanism design. They state in the description that: “The book is intended for graduate students, engineers, and researchers in the field of artificial intelligence and computer science.” I think it’s really weird that they don’t include (micro-)economists as well, because this stuff is obviously quite close to/potentially relevant to the kind of work some of these people are working on.

There are a lot of definitions, theorems, and proofs in this book, and as usual when doing work on game theory you need to think very carefully about the stuff they cover to be able to follow it, but I actually found it reasonably accessible – the book is not terribly difficult to read. Though I would probably advise you against reading the book if you have not at least read an intro text on game theory. Although as already mentioned the book deals with an analytical context in which utilities are non-transferable, it should be pointed out that this assumption is sort of implicit in the coverage, in the sense that the authors don’t really deal with utility functions at all; the book only deals with preference relations, not utility functions, so it probably helps to be familiar with this type of analysis (e.g. by having studied, and solved some problems dealing with, the kind of material covered in chapter 1 of Mas-Colell).

Part of the reason why I gave the book only two stars is that the authors are Chinese and their English is terrible. Another reason is that as is usually the case in game theory, these guys spend a lot of time and effort being very careful to define their terms and make correct inferences from the assumptions they make – but they don’t really end up saying very much.

February 28, 2015 Posted by | Books, Computer science, Economics | Leave a comment

Wikipedia articles of interest

i. Trade and use of saffron.

Saffron has been a key seasoning, fragrance, dye, and medicine for over three millennia.[1] One of the world’s most expensive spices by weight,[2] saffron consists of stigmas plucked from the vegetatively propagated and sterile Crocus sativus, known popularly as the saffron crocus. The resulting dried “threads”[N 1] are distinguished by their bitter taste, hay-like fragrance, and slight metallic notes. The saffron crocus is unknown in the wild; its most likely precursor, Crocus cartwrightianus, originated in Crete or Central Asia;[3] The saffron crocus is native to Southwest Asia and was first cultivated in what is now Greece.[4][5][6]

From antiquity to modern times the history of saffron is full of applications in food, drink, and traditional herbal medicine: from Africa and Asia to Europe and the Americas the brilliant red threads were—and are—prized in baking, curries, and liquor. It coloured textiles and other items and often helped confer the social standing of political elites and religious adepts. Ancient peoples believed saffron could be used to treat stomach upsets, bubonic plague, and smallpox.

Saffron crocus cultivation has long centred on a broad belt of Eurasia bounded by the Mediterranean Sea in the southwest to India and China in the northeast. The major producers of antiquity—Iran, Spain, India, and Greece—continue to dominate the world trade. […] Iran has accounted for around 90–93 percent of recent annual world production and thereby dominates the export market on a by-quantity basis. […]

The high cost of saffron is due to the difficulty of manually extracting large numbers of minute stigmas, which are the only part of the crocus with the desired aroma and flavour. An exorbitant number of flowers need to be processed in order to yield marketable amounts of saffron. Obtaining 1 lb (0.45 kg) of dry saffron requires the harvesting of some 50,000 flowers, the equivalent of an association football pitch’s area of cultivation, or roughly 7,140 m2 (0.714 ha).[14] By another estimate some 75,000 flowers are needed to produce one pound of dry saffron. […] Another complication arises in the flowers’ simultaneous and transient blooming. […] Bulk quantities of lower-grade saffron can reach upwards of US$500 per pound; retail costs for small amounts may exceed ten times that rate. In Western countries the average retail price is approximately US$1,000 per pound.[5] Prices vary widely elsewhere, but on average tend to be lower. The high price is somewhat offset by the small quantities needed in kitchens: a few grams at most in medicinal use and a few strands, at most, in culinary applications; there are between 70,000 and 200,000 strands in a pound.”

ii. Scramble for Africa.

“The “Scramble for Africa” (also the Partition of Africa and the Conquest of Africa) was the invasion and occupation, colonization and annexation of African territory by European powers during the period of New Imperialism, between 1881 and 1914. In 1870, 10 percent of Africa was under European control; by 1914 it was 90 percent of the continent, with only Abyssinia (Ethiopia) and Liberia still independent.”

Here’s a really neat illustration from the article:

[Image: Scramble for Africa, 1880–1913]

“Germany became the third largest colonial power in Africa. Nearly all of its overall empire of 2.6 million square kilometres and 14 million colonial subjects in 1914 was found in its African possessions of Southwest Africa, Togoland, the Cameroons, and Tanganyika. Following the 1904 Entente cordiale between France and the British Empire, Germany tried to isolate France in 1905 with the First Moroccan Crisis. This led to the 1905 Algeciras Conference, in which France’s influence on Morocco was compensated by the exchange of other territories, and then to the Agadir Crisis in 1911. Along with the 1898 Fashoda Incident between France and Britain, this succession of international crises reveals the bitterness of the struggle between the various imperialist nations, which ultimately led to World War I. […]

David Livingstone‘s explorations, carried on by Henry Morton Stanley, excited imaginations. But at first, Stanley’s grandiose ideas for colonisation found little support owing to the problems and scale of action required, except from Léopold II of Belgium, who in 1876 had organised the International African Association (the Congo Society). From 1869 to 1874, Stanley was secretly sent by Léopold II to the Congo region, where he made treaties with several African chiefs along the Congo River and by 1882 had sufficient territory to form the basis of the Congo Free State. Léopold II personally owned the colony from 1885 and used it as a source of ivory and rubber.

While Stanley was exploring Congo on behalf of Léopold II of Belgium, the Franco-Italian marine officer Pierre de Brazza travelled into the western Congo basin and raised the French flag over the newly founded Brazzaville in 1881, thus occupying today’s Republic of the Congo. Portugal, which also claimed the area due to old treaties with the native Kongo Empire, made a treaty with Britain on 26 February 1884 to block off the Congo Society’s access to the Atlantic.

By 1890 the Congo Free State had consolidated its control of its territory between Leopoldville and Stanleyville, and was looking to push south down the Lualaba River from Stanleyville. At the same time, the British South Africa Company of Cecil Rhodes was expanding north from the Limpopo River, sending the Pioneer Column (guided by Frederick Selous) through Matabeleland, and starting a colony in Mashonaland.

To the West, in the land where their expansions would meet, was Katanga, site of the Yeke Kingdom of Msiri. Msiri was the most militarily powerful ruler in the area, and traded large quantities of copper, ivory and slaves — and rumours of gold reached European ears. The scramble for Katanga was a prime example of the period. Rhodes and the BSAC sent two expeditions to Msiri in 1890 led by Alfred Sharpe, who was rebuffed, and Joseph Thomson, who failed to reach Katanga. Leopold sent four CFS expeditions. First, the Le Marinel Expedition could only extract a vaguely worded letter. The Delcommune Expedition was rebuffed. The well-armed Stairs Expedition was given orders to take Katanga with or without Msiri’s consent. Msiri refused, was shot, and the expedition cut off his head and stuck it on a pole as a “barbaric lesson” to the people. The Bia Expedition finished the job of establishing an administration of sorts and a “police presence” in Katanga.

Thus, the half million square kilometres of Katanga came into Leopold’s possession and brought his African realm up to 2,300,000 square kilometres (890,000 sq mi), about 75 times larger than Belgium. The Congo Free State imposed such a terror regime on the colonised people, including mass killings and forced labour, that Belgium, under pressure from the Congo Reform Association, ended Leopold II’s rule and annexed it in 1908 as a colony of Belgium, known as the Belgian Congo. […]

“Britain’s administration of Egypt and the Cape Colony contributed to a preoccupation over securing the source of the Nile River. Egypt was overrun by British forces in 1882 (although not formally declared a protectorate until 1914, and never an actual colony); Sudan, Nigeria, Kenya and Uganda were subjugated in the 1890s and early 20th century; and in the south, the Cape Colony (first acquired in 1795) provided a base for the subjugation of neighbouring African states and the Dutch Afrikaner settlers who had left the Cape to avoid the British and then founded their own republics. In 1877, Theophilus Shepstone annexed the South African Republic (or Transvaal – independent from 1857 to 1877) for the British Empire. In 1879, after the Anglo-Zulu War, Britain consolidated its control of most of the territories of South Africa. The Boers protested, and in December 1880 they revolted, leading to the First Boer War (1880–81). British Prime Minister William Gladstone signed a peace treaty on 23 March 1881, giving self-government to the Boers in the Transvaal. […] The Second Boer War, fought between 1899 and 1902, was about control of the gold and diamond industries; the independent Boer republics of the Orange Free State and the South African Republic (or Transvaal) were this time defeated and absorbed into the British Empire.”

There are a lot of unsourced claims in the article and some parts of it actually aren’t very good, but this is a topic about which I did not know much (I had no idea most of colonial Africa was acquired by the European powers as late as was actually the case). This is another good map from the article to have a look at if you just want the big picture.

iii. Cursed soldiers.

“The cursed soldiers (that is, “accursed soldiers” or “damned soldiers”; Polish: Żołnierze wyklęci) is a name applied to a variety of Polish resistance movements formed in the later stages of World War II and afterwards. Created by some members of the Polish Secret State, these clandestine organizations continued their armed struggle against the Stalinist government of Poland well into the 1950s. The guerrilla warfare included an array of military attacks launched against the new communist prisons as well as MBP state security offices, detention facilities for political prisoners, and concentration camps set up across the country. Most of the Polish anti-communist groups ceased to exist in the late 1940s or 1950s, hunted down by MBP security services and NKVD assassination squads.[1] However, the last known ‘cursed soldier’, Józef Franczak, was killed in an ambush as late as 1963, almost 20 years after the Soviet take-over of Poland.[2][3] […] Similar eastern European anti-communists fought on in other countries. […]

Armia Krajowa (or simply AK)-the main Polish resistance movement in World War II-had officially disbanded on 19 January 1945 to prevent a slide into armed conflict with the Red Army, including an increasing threat of civil war over Poland’s sovereignty. However, many units decided to continue on with their struggle under new circumstances, seeing the Soviet forces as new occupiers. Meanwhile, Soviet partisans in Poland had already been ordered by Moscow on June 22, 1943 to engage Polish Leśni partisans in combat.[6] They commonly fought Poles more often than they did the Germans.[4] The main forces of the Red Army (Northern Group of Forces) and the NKVD had begun conducting operations against AK partisans already during and directly after the Polish Operation Tempest, designed by the Poles as a preventive action to assure Polish rather than Soviet control of the cities after the German withdrawal.[5] Soviet premier Joseph Stalin aimed to ensure that an independent Poland would never reemerge in the postwar period.[7] […]

The first Polish communist government, the Polish Committee of National Liberation, was formed in July 1944, but declined jurisdiction over AK soldiers. Consequently, for more than a year, it was Soviet agencies like the NKVD that dealt with the AK. By the end of the war, approximately 60,000 soldiers of the AK had been arrested, and 50,000 of them were deported to the Soviet Union’s gulags and prisons. Most of those soldiers had been captured by the Soviets during or in the aftermath of Operation Tempest, when many AK units tried to cooperate with the Soviets in a nationwide uprising against the Germans. Other veterans were arrested when they decided to approach the government after being promised amnesty. In 1947, an amnesty was passed for most of the partisans; the Communist authorities expected around 12,000 people to give up their arms, but the actual number of people to come out of the forests eventually reached 53,000. Many of them were arrested despite promises of freedom; after repeated broken promises during the first few years of communist control, AK soldiers stopped trusting the government.[5] […]

The persecution of the AK members was only a part of the reign of Stalinist terror in postwar Poland. In the period of 1944–56, approximately 300,000 Polish people had been arrested,[21] or up to two million, by different accounts.[5] There were 6,000 death sentences issued, the majority of them carried out.[21] Possibly, over 20,000 people died in communist prisons including those executed “in the majesty of the law” such as Witold Pilecki, a hero of Auschwitz.[5] A further six million Polish citizens (i.e., one out of every three adult Poles) were classified as suspected members of a ‘reactionary or criminal element’ and subjected to investigation by state agencies.”

iv. Affective neuroscience.

Affective neuroscience is the study of the neural mechanisms of emotion. This interdisciplinary field combines neuroscience with the psychological study of personality, emotion, and mood.[1]

This article is actually related to the Delusion and self-deception book, which covered some of the stuff included in this article, but I decided I might as well include the link in this post. I think some parts of the article are written in a somewhat different manner than most wiki articles – there are specific paragraphs briefly covering the results of specific meta-analyses conducted in this field. I can’t really tell from this article if I actually like this way of writing a wiki article or not.

v. Hamming distance. Not a long article, but this is a useful concept to be familiar with:

“In information theory, the Hamming distance between two strings of equal length is the number of positions at which the corresponding symbols are different. In another way, it measures the minimum number of substitutions required to change one string into the other, or the minimum number of errors that could have transformed one string into the other. […]

The Hamming distance is named after Richard Hamming, who introduced it in his fundamental paper on Hamming codes, Error detecting and error correcting codes, in 1950.[1] It is used in telecommunication to count the number of flipped bits in a fixed-length binary word as an estimate of error, and therefore is sometimes called the signal distance. Hamming weight analysis of bits is used in several disciplines including information theory, coding theory, and cryptography. However, for comparing strings of different lengths, or strings where not just substitutions but also insertions or deletions have to be expected, a more sophisticated metric like the Levenshtein distance is more appropriate.”
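The definition is simple enough that a few lines of code illustrate it; a minimal sketch (mine, not from the article):

```python
def hamming_distance(a: str, b: str) -> int:
    """Number of positions at which two equal-length strings differ."""
    if len(a) != len(b):
        raise ValueError("Hamming distance is only defined for equal-length strings")
    return sum(x != y for x, y in zip(a, b))

print(hamming_distance("karolin", "kathrin"))   # 3
print(hamming_distance("1011101", "1001001"))   # 2

# For bit strings, the distance is just the Hamming weight (popcount) of the XOR:
print(bin(0b1011101 ^ 0b1001001).count("1"))    # 2
```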

vi. Menstrual synchrony. I came across that one recently in a book, and when I did it was obvious that the author had not read this article and lacked some of the knowledge included in it (the phenomenon was assumed to be real in the coverage, and theory was developed assuming it was real, which would not make sense if it were not). I figured if that person didn’t know this stuff, a lot of other people – including people reading along here – probably also do not, so I should cover this topic somewhere. This is an obvious place to do so. Okay, on to the article coverage:

Menstrual synchrony, also called the McClintock effect,[2] is the alleged process whereby women who begin living together in close proximity experience their menstrual cycle onsets (i.e., the onset of menstruation or menses) becoming closer together in time than previously. “For example, the distribution of onsets of seven female lifeguards was scattered at the beginning of the summer, but after 3 months spent together, the onset of all seven cycles fell within a 4-day period.”[3]

Martha McClintock’s 1971 paper, published in Nature, says that menstrual cycle synchronization happens when the menstrual cycle onsets of two or more women become closer together in time than they were several months earlier.[3] Several mechanisms have been hypothesized to cause synchronization.[4]

After the initial studies, several papers were published reporting methodological flaws in studies reporting menstrual synchrony including McClintock’s study. In addition, other studies were published that failed to find synchrony. The proposed mechanisms have also received scientific criticism. A 2013 review of menstrual synchrony concluded that menstrual synchrony is doubtful.[4] […] in a recent systematic review of menstrual synchrony, Harris and Vitzthum concluded that “In light of the lack of empirical evidence for MS [menstrual synchrony] sensu stricto, it seems there should be more widespread doubt than acceptance of this hypothesis.” […]

The experience of synchrony may be the result of the mathematical fact that menstrual cycles of different frequencies repeatedly converge and diverge over time and not due to a process of synchronization.[12] It may also be due to the high probability of menstruation overlap that occurs by chance.[6]
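That last point is easy to illustrate with a small sketch (my own toy numbers, not from the article): two cycles of slightly different lengths will drift in and out of phase, so stretches of apparent ‘synchrony’ are to be expected even without any synchronizing mechanism.

```python
# Two hypothetical women with cycle lengths of 28 and 30 days; track how far each of
# A's onsets is from B's nearest onset.

cycle_a, cycle_b = 28, 30
onsets_a = [cycle_a * i for i in range(40)]
onsets_b = [cycle_b * i for i in range(40)]

for day_a in onsets_a[:15]:
    nearest_gap = min(abs(day_a - day_b) for day_b in onsets_b)
    print(f"day {day_a:4d}: nearest onset of B is {nearest_gap:2d} days away")
# The gap shrinks towards 0, grows towards roughly half a cycle, and shrinks again --
# the cycles converge and diverge purely as a matter of arithmetic.
```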

 

December 4, 2014 Posted by | Biology, Botany, Computer science, Geography, History, Medicine, Neurology, Psychology, Wikipedia | Leave a comment

A few lectures

A few lectures from Gresham College:

An interesting lecture on symmetry patterns and symmetry breaking. A lot of the discussion of the relevant principles takes animal skin patterns and -movement patterns as the starting point for the analysis, leading to interesting quotes/observations like these: “Theorem: A spotted animal can have a striped tail, but a striped animal cannot have a spotted tail”, and “…but it can’t result in a horse, because a horse is not spherically symmetric”.

He also talks about e.g. snowflakes and sand dunes and this does not feel like a theoretical lecture at all – he’s sort of employing an applied maths approach to this topic which I like. Despite the fact that it’s basically a mathematics lecture it’s quite easy to follow and I enjoyed watching it.

He takes a long time to get started and he doesn’t actually ever say much about the non-Euclidean stuff (he never even explicitly distinguishes hyperbolic geometry from elliptic geometry using those terms). He’s also not completely precise in his language during the entire lecture; at one point he emphasizes the fact that three specific choices used in a proof were ‘mutually exclusive’ as though that was the key, even though what’s actually critical is that they were also collectively exhaustive – a point he fails to mention (and I’d assume it would be easy for a viewer not reasonably well-versed in mathematics to mix up these distinctions if they were not already familiar with the concepts). But maybe you’ll find it interesting anyway. It wasn’t a particularly bad lecture, I’d just expected a little more. I know where to look if I want a more complete picture of the things briefly touched upon in this lecture, and I’ve looked at that stuff before, but I’m certainly not going to read Penrose again any time soon – that stuff’s way too much work considering the benefits of knowing it in detail (if I’m even theoretically able to obtain knowledge of the details – some of that stuff is really hard).

December 7, 2013 Posted by | Biology, Computer science, History, Lectures, Mathematics | 2 Comments

Stuff

i. Econometric methods for causal evaluation of education policies and practices: a non-technical guide. This one is ‘work-related’; in one of my courses I’m writing a paper, and this working paper is one of the (many) sources I’m planning on using. Most of the papers I work with are unfortunately not freely available online, which is part of why I haven’t linked to them here on the blog.

I should note that there are no equations in this paper, so you should focus on the words ‘a non-technical guide’ rather than the words ‘econometric methods’ in the title – I think this is a very readable paper for the non-expert as well. I should of course also note that I have worked with most of these methods in a lot more detail, and that without the math it’s very hard to understand the details and really know what’s going on e.g. when applying such methods – or related methods such as IV methods on panel data, a topic which was covered in another class just a few weeks ago but which is not covered in this paper.

This is a place to start if you want to know something about applied econometric methods, particularly if you want to know how they’re used in the field of educational economics, and especially if you don’t have a strong background in stats or math. It should be noted that some of the methods covered see widespread use in other areas of economics as well; IV is widely used, and the difference-in-differences estimator has seen a lot of applications in health economics.
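For illustration, here’s the difference-in-differences idea in its simplest 2×2 form – a toy example with made-up numbers, not taken from the paper:

```python
# Compare the before/after change in a treated group with the before/after change
# in a control group; the difference between the two changes is the DiD estimate.

means = {                       # hypothetical average outcomes (e.g. test scores)
    ("treated", "before"): 50.0,
    ("treated", "after"):  58.0,
    ("control", "before"): 49.0,
    ("control", "after"):  52.0,
}

change_treated = means[("treated", "after")] - means[("treated", "before")]   # 8.0
change_control = means[("control", "after")] - means[("control", "before")]   # 3.0
did_estimate = change_treated - change_control                                # 5.0

print(f"Difference-in-differences estimate of the policy effect: {did_estimate}")
```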

ii. Regulating the Way to Obesity: Unintended Consequences of Limiting Sugary Drink Sizes. The law of unintended consequences strikes again.

You could argue with some of the assumptions made here (e.g. that prices (/oz) remain constant) but I’m not sure the findings are that sensitive to that assumption, and without an explicit model of the pricing mechanism at work it’s mostly guesswork anyway.

iii. A discussion about the neurobiology of memory. Razib Khan posted a short part of the video recently, so I decided to watch it today. A few relevant wikipedia links: Memory, Dead reckoning, Hebbian theory, Caenorhabditis elegans. I’m skeptical, but I agree with one commenter who put it this way: “I know darn well I’m too ignorant to decide whether Randy is possibly right, or almost certainly wrong — yet I found this interesting all the way through.” I also agree with another commenter who mentioned that it’d have been useful for Gallistel to go into details about the differences between short term and long term memory and how these differences relate to the problem at hand.

iv. Plos-One: Low Levels of Empathic Concern Predict Utilitarian Moral Judgment.

“An extensive body of prior research indicates an association between emotion and moral judgment. In the present study, we characterized the predictive power of specific aspects of emotional processing (e.g., empathic concern versus personal distress) for different kinds of moral responders (e.g., utilitarian versus non-utilitarian). Across three large independent participant samples, using three distinct pairs of moral scenarios, we observed a highly specific and consistent pattern of effects. First, moral judgment was uniquely associated with a measure of empathy but unrelated to any of the demographic or cultural variables tested, including age, gender, education, as well as differences in “moral knowledge” and religiosity. Second, within the complex domain of empathy, utilitarian judgment was consistently predicted only by empathic concern, an emotional component of empathic responding. In particular, participants who consistently delivered utilitarian responses for both personal and impersonal dilemmas showed significantly reduced empathic concern, relative to participants who delivered non-utilitarian responses for one or both dilemmas. By contrast, participants who consistently delivered non-utilitarian responses on both dilemmas did not score especially high on empathic concern or any other aspect of empathic responding.”

In case you were wondering, the difference hasn’t got anything to do with a difference in the ability to ‘see things from the other guy’s point of view’: “the current study demonstrates that utilitarian responders may be as capable at perspective taking as non-utilitarian responders. As such, utilitarian moral judgment appears to be specifically associated with a diminished affective reactivity to the emotions of others (empathic concern) that is independent of one’s ability for perspective taking”.

On a small sidenote, I’m not really sure I get the authors at all – one of the questions they ask in the paper’s last part is whether ‘utilitarians are simply antisocial?’ This is such a stupid way to frame this I don’t even know how to begin to respond; I mean, utilitarians make better decisions that save more lives, and that’s consistent with them being antisocial? I should think the ‘social’ thing to do would be to save as many lives as possible. Dead people aren’t very social, and when your actions cause more people to die they also decrease the scope for future social interaction.

v. Lastly, some Khan Academy videos:

(Relevant links: Compliance, Preload).

(This one may be very hard to understand if you haven’t covered this stuff before, but I figured I might as well post it here. If you don’t know e.g. what myosin and actin are you probably won’t get much out of this video. If you don’t watch it, this part of what’s covered is probably the most important part to take away from it.)

It’s been a long time since I checked out the Brit Cruise information theory playlist, and I was happy to learn that he’s updated it and added some more stuff. I like the way he combines historical stuff with a ‘how does it actually work, and how did people realize that’s how it works’ approach – learning how people figured out stuff is to me sometimes just as fascinating as learning what they figured out:

(Relevant wikipedia links: Leyden jar, Electrostatic generator, Semaphore line. Cruise’s play with the cat and the amber may look funny, but there’s a point to it: “The Greek word for amber is ηλεκτρον (“elektron”) and is the origin of the word “electricity”.” – from the first link).

(Relevant wikipedia links: Galvanometer, Morse code)

April 14, 2013 Posted by | Cardiology, Computer science, Cryptography, Econometrics, Khan Academy, Medicine, Neurology, Papers, Physics, Random stuff, Statistics | Leave a comment

Khan Academy videos of interest

Took me a minute to solve without hints. I had to scribble a few numbers down (like Khan does in the video), but you should be able to handle it without hints. (Actually I think some of the earlier brainteasers on the playlist are harder than this one and that some of the later ones are easier, but it’s been a while since I saw the first ones.)


Much more here.

Naturally this is from the computer science section.

It’s been a while since I’ve last been to Khan Academy – it seems that these days they have an entire section about influenza.

February 10, 2013 Posted by | Cardiology, Computer science, Infectious disease, Khan Academy, Lectures, Mathematics, Medicine | Leave a comment

Wikipedia articles of interest

i. Huia (featured).

“The Huia (Māori: [ˈhʉia]; Heteralocha acutirostris) was the largest species of New Zealand wattlebird and was endemic to the North Island of New Zealand.”

What they looked like:

[Image: Keulemans’ illustration of Huia]

“Even though the Huia is frequently mentioned in biology and ornithology textbooks because of this striking dimorphism, not much is known about its biology; it was little studied before it was driven to extinction. The Huia is one of New Zealand’s best known extinct birds because of its bill shape, its sheer beauty and special place in Māori culture and oral tradition. […]

The Huia had no fear of people; females allowed themselves to be handled on the nest,[8] and birds could easily be captured by hand.[11] […]

The Huia was found throughout the North Island before humans arrived in New Zealand. The Māori arrived around 800 years ago, and by the arrival of European settlers in the 1840s, habitat destruction and hunting had reduced the bird’s range to the southern North Island.[13] However, Māori hunting pressures on the Huia were limited to some extent by traditional protocols. The hunting season was from May to July when the bird’s plumage was in prime condition, while a rāhui (hunting ban) was enforced in spring and summer.[15] It was not until European settlement that the Huia’s numbers began to decline severely, due mainly to two well-documented factors: widespread deforestation and overhunting. […]

Habitat destruction and the predations of introduced species were problems faced by all New Zealand birds, but in addition the Huia faced massive pressure from hunting. Due to its pronounced sexual dimorphism and its beauty, Huia were sought after as mounted specimens by wealthy collectors in Europe[42] and by museums all over the world.[15][20] These individuals and institutions were willing to pay large sums of money for good specimens, and the overseas demand created a strong financial incentive for hunters in New Zealand.[42]

ii. British colonization of the Americas. Not very detailed, but this article is a good place to start if one wants to read about the various colonies; it has a lot of links.

iii. Iron Dome.

Iron Dome (Hebrew: כִּפַּת בַּרְזֶל, kipat barzel), also known as “Iron Cap”,[6] is a mobile all-weather air defense system[5] developed by Rafael Advanced Defense Systems.[4] It is a missile system designed to intercept and destroy short-range rockets and artillery shells fired from distances of 4 to 70 kilometers away and whose trajectory would take them to a populated area.[7][8] […] The system, created as a defensive countermeasure to the rocket threat against Israel’s civilian population on its northern and southern borders, uses technology first employed in Rafael’s SPYDER system. Iron Dome was declared operational and initially deployed on 27 March 2011 near Beersheba.[10] On 7 April 2011, the system successfully intercepted a Grad rocket launched from Gaza for the first time.[11] On 10 March 2012, The Jerusalem Post reported that the system shot down 90% of rockets launched from Gaza that would have landed in populated areas.[8] By November 2012, it had intercepted 400+ rockets.[12][13] Based on this success, Defense reporter Mark Thompson estimates that Iron Dome is the most effective and most tested missile shield in existence.[14]

The Iron Dome system is also effective against aircraft up to an altitude of 32,800 ft (10,000 m).[15] […]

[Image: Iron Dome battery near Sderot]

During the 2006 Second Lebanon War, approximately 4,000 Hezbollah-fired rockets (the great majority of which were short-range Katyusha rockets) landed in northern Israel, including on Haifa, the country’s third largest city. The massive rocket barrage killed 44 Israeli civilians[16] and caused some 250,000 Israeli citizens to evacuate and relocate to other parts of Israel while an estimated 1,000,000 Israelis were confined in or near shelters during the conflict.[17]

To the south, more than 4,000 rockets and 4,000 mortar bombs were fired into Israel from Gaza between 2000 and 2008, principally by Hamas. Almost all of the rockets fired were Qassams launched by 122 mm Grad launchers smuggled into the Gaza Strip, giving longer range than other launch methods. Nearly 1,000,000 Israelis living in the south are within rocket range, posing a serious security threat to the country and its citizens.[18]

In February 2007, Defense Minister Amir Peretz selected Iron Dome as Israel’s defensive solution to this short-range rocket threat.[19] […]

In November 2012, during Operation Pillar of Defense, the Iron Dome’s effectiveness was estimated by Israeli officials at between 75 and 95 percent.[88] According to Israeli officials, of the approximately 1,000 missiles and rockets fired into Israel by Hamas from the beginning of Operation Pillar of Defense up to November 17, 2012, Iron Dome identified two thirds as not posing a threat and intercepted 90 percent of the remaining 300.[89] During this period the only Israeli casualties were three individuals killed in missile attacks after a malfunction of the Iron Dome system.[90]

In comparison with other air defense systems, the effectiveness rate of Iron Dome is very high.[88]

iv. Evolution of cetaceans (whales and dolphins). They’re a lot ‘younger’ than I thought.

v. Curiosity rover.

[Image: High-resolution self-portrait by Curiosity rover arm camera]

This is an actual (composite) picture of a robot on another planet. At this moment it is driving around doing scientific experiments. On another planet. I’ll say it again: Living in the 21st century is awesome.

vi. Halting Problem.

“In computability theory, the halting problem can be stated as follows: “Given a description of an arbitrary computer program, decide whether the program finishes running or continues to run forever“. This is equivalent to the problem of deciding, given a program and an input, whether the program will eventually halt when run with that input, or will run forever.

Alan Turing proved in 1936 that a general algorithm to solve the halting problem for all possible program-input pairs cannot exist. A key part of the proof was a mathematical definition of a computer and program, what became known as a Turing machine; the halting problem is undecidable over Turing machines. It is one of the first examples of a decision problem. […]

The halting problem is a decision problem about properties of computer programs on a fixed Turing-complete model of computation, i.e. all programs that can be written in some given programming language that is general enough to be equivalent to a Turing machine. The problem is to determine, given a program and an input to the program, whether the program will eventually halt when run with that input. In this abstract framework, there are no resource limitations on the amount of memory or time required for the program’s execution; it can take arbitrarily long, and use arbitrarily much storage space, before halting. The question is simply whether the given program will ever halt on a particular input. […]

One approach to the problem might be to run the program for some number of steps and check if it halts. But if the program does not halt, it is unknown whether the program will eventually halt or run forever.

Turing proved there cannot exist an algorithm which will always correctly decide whether, for a given arbitrary program and its input, the program halts when run with that input; the essence of Turing’s proof is that any such algorithm can be made to contradict itself, and therefore cannot be correct. […]

The halting problem is historically important because it was one of the first problems to be proved undecidable.”
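The flavour of Turing’s diagonal argument can be sketched in a few lines of code; the sketch below is purely illustrative and mine, with a made-up halts() function standing in for the oracle that cannot exist:

```python
# Hypothetical sketch of the diagonalization argument (this code cannot actually
# be completed -- that is the whole point).

def halts(program, input_data) -> bool:
    """Assumed oracle: returns True iff program(input_data) eventually halts."""
    raise NotImplementedError("No such general algorithm can exist")

def paradox(program):
    # Do the opposite of whatever the oracle predicts for a program run on itself.
    if halts(program, program):
        while True:       # loop forever if the oracle says 'halts'
            pass
    else:
        return            # halt immediately if the oracle says 'loops forever'

# Feeding paradox to itself yields a contradiction either way:
# if halts(paradox, paradox) is True, then paradox(paradox) loops forever;
# if it is False, then paradox(paradox) halts. Hence halts() cannot exist.
```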

vii. Fetal Alcohol Syndrome.

Fetal alcohol syndrome (FAS) is a pattern of mental and physical defects that can develop in a fetus in association with high levels of alcohol consumption during pregnancy. […]

Alcohol crosses the placental barrier and can stunt fetal growth or weight, create distinctive facial stigmata, damage neurons and brain structures, which can result in psychological or behavioral problems, and cause other physical damage.[6][7][8] Surveys found that in the United States, 10–15% of pregnant women report having recently drunk alcohol, and up to 30% drink alcohol at some point during pregnancy.[9][10][11]

The main effect of FAS is permanent central nervous system damage, especially to the brain. Developing brain cells and structures can be malformed or have development interrupted by prenatal alcohol exposure; this can create an array of primary cognitive and functional disabilities (including poor memory, attention deficits, impulsive behavior, and poor cause-effect reasoning) as well as secondary disabilities (for example, predispositions to mental health problems and drug addiction).[8][12] Alcohol exposure presents a risk of fetal brain damage at any point during a pregnancy, since brain development is ongoing throughout pregnancy.[13]

Fetal alcohol exposure is the leading known cause of mental retardation in the Western world.[14][15] In the United States and Europe, the FAS prevalence rate is estimated to be between 0.2-2 in every 1000 live births.[16][17] FAS should not be confused with Fetal Alcohol Spectrum Disorders (FASD), a condition which describes a continuum of permanent birth defects caused by maternal consumption of alcohol during pregnancy, which includes FAS, as well as other disorders, and which affects about 1% of live births in the US.[18][19][20][21] The lifetime medical and social costs of FAS are estimated to be as high as US$800,000 per child born with the disorder.[22]

That’s a US estimate, but I think a Danish one would be within the same order of magnitude. Imagine how the incentives of expectant mothers would change if we fined females who gave birth to a child with FAS, letting the fine be some fraction of the total estimated social costs. And remind me again why we do not do this?

December 15, 2012 Posted by | Astronomy, Biology, Computer science, Evolutionary biology, History, Mathematics, Medicine, Neurology, Wikipedia, Zoology | Leave a comment

Stuff

Some links and stuff from around the web:

i. A lecture on Averaging algorithms and distributed optimization. He’s quite good, but this is not for everyone; you need a maths/stats background to some extent to understand what’s going on. I’ve seen many types of lectures online, but in terms of format this one is probably among the ‘closest’ to the type of lectures that are available to students where I study, covering the kind of stuff I study: there’s a lot of math; there’s a very clearly defined structure and the lecturer knows exactly what he’s supposed to cover during the lecture; you proceed from the simple case and then add some complexity/exceptions etc. along the way; some i’s and j’s will be mixed up and a plus or minus sign will need to be corrected somewhere; the lecturer rarely asks the people attending class any questions, and if it’s a good lecture there will not be a lot of questions from the audience either. It reminded me of the econometrics lectures I had some time ago, also because the stuff covered in the lecture relates a bit to material covered back then (‘gradient-like methods’, the convergence properties of various optimization algorithms, etc.).
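To give a flavour of what ‘averaging algorithms’ refers to in this context, here’s a minimal consensus-averaging sketch – a toy example of my own, not taken from the lecture:

```python
# Each agent repeatedly replaces its value with the average of its own value and its
# neighbours' values; on a connected graph the values converge to a common number.
# (With this simple equal-weight rule the consensus value is not necessarily the exact
# global mean unless the weights are chosen suitably, e.g. Metropolis-style weights.)

neighbours = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # a line graph of 4 agents
x = {0: 4.0, 1: 8.0, 2: 0.0, 3: 12.0}                 # initial local values

for step in range(200):
    x = {i: (x[i] + sum(x[j] for j in neighbours[i])) / (1 + len(neighbours[i]))
         for i in x}

print(x)   # all entries (approximately) equal: the agents have reached consensus
```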

ii. Cyanide & happiness. I found the comic a week ago or so and I like it. A few examples (click to view full size):

 

iii. From edge.org: What is life? A 21st century perspective, by Craig Venter. Not a bad way to spend an hour of your life.

iv. A list of free statistical software available online. There are a lot of those around!

v. An awesome retraction-story. The peer-review process is not always bulletproof:

“[Hyung-In Moon] suggested preferred reviewers during the submission which were him or colleagues under bogus identities and accounts. In some cases the names of real people were provided (so if Googling them, you would see that they did exist) but he created email accounts for them which he or associates had access to and which were then used to provide peer review comments. In other cases he just made up names and email addresses. The review comments submitted by these reviewers were almost always favourable but still provided suggestions for paper improvement.” (via Ed Yong)

vi. “In a study now in press in Neurobiology of Aging (download PDF copy here), we studied the effects of healthy aging on how the brain processes different kinds of visual information. Based on prior work showing that visual attention towards objects predominantly recruited regions of the medial temporal lobe (MTL), compared to attention towards positions, we tested whether this specialization would wither with increasing age.

Basically, we tested the level of brain specialization by comparing the BOLD fMRI signal directly between object processing and position processing. We looked at each MTL structure individually by analyzing the results in each individual brain (native space) rather than relying on spatial normalization of brains, which is known to induce random and systematic distortions in MTL structures (see here and here for PDF of conference presentations I’ve had on this).

Running the test with functional MRI, we found that several regions showed a change in specialization. During encoding, the right amygdala and parahippocampal cortex, and tentatively other surrounding MTL regions, showed such decreases in specialization.

During preparation and rehearsal, no changes reached significance.

However, during the stage of recognition, more or less the entire MTL region demonstrated detrimental changes with age. That is, with increasing age, those regions that tend to show a strong response to object processing compared to spatial processing, now dwindle in this effect. At higher ages, such as 75+, the ability of the brain to differentiate between object and spatial content is gone in many crucial MTL structures.

This suggests that at least one important change with increasing age is its ability to differentiate between different kinds of content. If your brain is unable to selectively focus on one kind of information (and possibly inhibit processing of other aspects of the information), then neither learning or memory can operate successfully.” (link)

August 28, 2012 Posted by | Biology, comics, Computer science, Lectures, Neurology, Statistics, Studies | Leave a comment

More Khan Academy stuff you should know about

It’s been a while since I’ve been to Khan Academy (actually getting the Kepler badge sort of killed my motivation for a while), but I revisited the site earlier today and I realized that they’ve launched a brand new computer science section which looks really neat. Intro video below:

August 27, 2012 Posted by | Computer science, Khan Academy | Leave a comment