Akismet is letting a lot of crap through the filter, and I get angry when I see people post spam on my blog. So I’ve decided to change my comment policy: from now on, the first comment a person makes is withheld until I’ve approved it. Once you’ve had one comment approved by me, your future comments will appear immediately after you’ve posted them.
I do not know whether people who’ve posted in the past (which covers almost everybody who might decide to comment here) are also required to have a comment approved by me – I hope not, but even if they are, it’ll still only be the first comment.
I’d have preferred not to make this change, but having to delete spam comments manually on a daily basis is completely unacceptable. And it is a rather minor change.
If you want to test whether the first-time approval is required for you even though you’ve posted comments here before, and/or you just want to get that first-time approval over with now, you can leave a semi-random comment below (make it a general feedback comment if you like) and check whether it appears immediately. I’ll be very lax about approving comments to this post from people who’ve posted here before, though the main rule still applies (‘don’t be a jerk..’). New readers are encouraged to introduce themselves here if they feel like it (I know some people reading along prefer to remain anonymous/hidden/invisible/whatever). If you have questions about the comment policy, this is probably the place to ask them – but before people start asking such questions I’ll point out that if you try to add value to the discussion and argue in good faith, you have nothing to fear. In over half a decade I’ve only ever banned one individual.
All that said, this is my blog – it’s my place. I post a lot of different stuff, and some of that stuff’s personal. This is not a sphere where you can just say whatever you like, though you’re very much encouraged to say and share interesting stuff as people sometimes (though much too rarely) do.
In completely unrelated news, I’ve recently engaged in a discussion over at Gene Expression, where I left a few comments – I do not often post stuff elsewhere, so I figured that if people were interested and hadn’t seen this, I might as well leave a link.
Sorry for the lack of updates – I have explained the reasons for my inactivity here, and it will have to last a bit longer.
I had an exam today – I passed, and it went quite well.
I plan on studying ~10 hours/day over the next week, so I will not have much time for blogging. You can expect me to get ‘back to normal’ in the last part of January (in another 10 days or so).
I have exams coming up and so I won’t post much during the next couple of weeks.
I’d like to point out to new readers in particular that I posted more than 200 posts last year. Even though posting frequency will be low for a little while, this is hardly an inactive blog in general.
(link). Some people would say that you should formulate the hypothesis before you start gathering data – and that’s what I’ll do now.
I guess this post is mostly for people like Plamus, but other people are very welcome to read along as well. I’ll start out with some introductory remarks. I have an account on a chess website – playchess.com. It’s a neat site, I like it. They’ve recently introduced a new feature: a so-called ‘tactics trainer’. The way the tactics trainer works is by means of tactics sessions. Each tactics session features a number of chess problems you need to solve under a time constraint. You’ll never run out of problems; each time you’ve solved one problem (or answered incorrectly), a new one will pop up. Each session lasts about 6 minutes – some problems can be solved in a second or two, others might take more than a minute. The outcome of a session will depend upon the number of problems solved correctly, the ‘toughness’ of the problems solved or not solved, and probably various other factors as well. Once you’ve finished a session, you’ll get statistics on the number of correctly and incorrectly solved problems, the average time spent on each problem, and the corresponding tactics performance rating. The performance rating will impact your combined tactics rating, which is a result of all previous sessions (it’s like a standard Elo rating system with frequent updating).
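Since the combined tactics rating is described as ‘a standard Elo rating system with frequent updating’, here is a minimal sketch of one way such an update could work. The per-problem scheme and the K-factor are my own illustrative assumptions, not playchess.com’s actual formula.

```python
# Sketch of an Elo-style tactics rating: each problem is treated as an
# 'opponent' with its own rating; solving it counts as a win, failing as a
# loss. The K-factor and the per-problem scheme are illustrative assumptions.

def expected_score(player_rating: float, problem_rating: float) -> float:
    """Standard Elo expected score of the player against the problem."""
    return 1.0 / (1.0 + 10 ** ((problem_rating - player_rating) / 400.0))

def update_rating(player_rating: float, problem_rating: float,
                  solved: bool, k: float = 20.0) -> float:
    """Move the rating up on a solve, down on a miss, scaled by 'surprise'."""
    score = 1.0 if solved else 0.0
    return player_rating + k * (score - expected_score(player_rating, problem_rating))
```

Under this scheme, solving a problem rated at your own level moves you up by k/2 points, and failing one moves you down by the same amount; surprising outcomes (failing an easy problem, solving a hard one) move the rating more.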
But why is the tactics trainer worth blogging about? Well, here’s the thing: solving tactics problems is hard, and it’s a cognitively demanding task. It takes brain power, and if your brain isn’t working at 100% you’ll do worse than if it were. I have often thought about how to model the effects of blood glucose variations on cognitive performance. I’ve thought about it because I know that blood glucose variation impacts my performance in various areas – in extreme cases it’s extremely obvious. But what about the non-extreme cases? Blood glucose fluctuates a lot over the course of a day, and it’s not unlikely that such fluctuations also impact performance. But can those effects be quantified? So far it’s been difficult for me to figure out how one would set about doing that. One approach I’ve contemplated in the past was to use IQ tests to measure performance as a function of blood glucose, but that idea was basically dead in the water in terms of getting the kind of results I’d like: an IQ test takes a lot of time, it’s not always easy to compare scores across tests, and you can’t do the same test over and over, because the way the test is designed the validity of the results will be impacted if you repeat it. Another problem is that the blood glucose level wouldn’t even be exogenous – being in a state of deep concentration for a long time under stressful circumstances impacts blood glucose. What would be much better would be a shorter version of the test – a relatively short test where a high level of concentration is required to perform well, and where even small differences in performance as a result of blood glucose fluctuations can be measured and quantified. Remember the tactics trainer I was talking about? Yeah…
It seemed to me that using the tactics trainer sessions to gauge ‘mental ability’ as a function of blood glucose actually makes a lot of sense; it’s possible to run a lot of sessions over time, so n can potentially become large enough to actually make room for some non-silly results. There are always new and different problems available, and the comparability issue across tests disappears completely. Blood glucose values can be taken as exogenous as the sessions last only a very short amount of time. Performances are precisely measured.
I should make it clear from the start that the effect of blood glucose on performance is non-linear. Extremely low values impact performance, as do extremely high values – so in theory some kind of semi-inverse-u-shaped pattern should probably be expected. The actual relationship would not look very much like an inverse u, both because the scales are asymmetric in terms of symptoms per mmol/l of deviation from the desired level – a blood glucose of somewhere between 4 and 10 mmol/l is often considered ‘desirable’, but whereas a value of 0 will mean that you’re dead, a value of 14 will for many diabetics probably often not give any symptoms at all – and because the left hand side is truncated (as mentioned) whereas in practice the right hand side is not for well-treated patients.
I will make a simplifying assumption here that will save me a lot of work and arguably will not be all that problematic when interpreting the results. I’ll disregard the non-linearities in the data by removing all observations related to performance effects to the left of the lower bound of the ‘desirable level’, and by assuming that the ‘true’ non-linear relationship between performance and blood glucose on the right hand side of the distribution can be approximated by a linear function without this causing too many problems. The way to deal with the performance effects to the left of the lower bound of the ‘desirable level’ will be to exclude from the sample all observations with a measured blood glucose below 4.0 mmol/l. My motivation for removing the lowest values is that it will always become obvious to me within a very short amount of time, when my blood glucose is that low, that there’s a significant performance effect. I know those effects very well, and I know that it’s a bad idea to delay treatment – blood glucose levels below that can quickly turn into a medical emergency. When thinking about performance effects here, it seems to me to make a lot of sense to implicitly employ a two-state framework and then use separate models to analyze what’s going on in the two states. State one is quite simple; that’s the hypoglycemia scenario mentioned. To ‘model’ this state is easy: the effects are almost universally real and significant, to an extent where even measuring them in the manner described here becomes borderline dangerous. State two is euglycemia or hyperglycemia. In this state, performance is likely to be at least approximately linearly decreasing in the blood glucose level.
I’m mostly interested in performance effects which are not obvious to me, and that makes state two the more interesting state to consider; it’s also a lot more interesting because state one is relatively (though not that…) rare, whereas state two is the default state in which I spend most of my time. Regarding using a linear approximation to model the relationship in state two rather than the ‘true’ non-linear function: this may be problematic, but I know myself well enough to know that I don’t want to bother with non-linear models when I look at this stuff later; it’s a poor and underspecified model to begin with. The kind of question I’m asking here is far more along the lines of ‘does it even make sense to assume that your cognitive profile is affected by blood glucose variation?’ than it is a question along the lines of ‘how will a 2.6 mmol/l difference impact your likelihood of getting an A when taking an exam in course X?’
When it comes to the specifics of the data gathering process, I’ll do it this way: Unless I have symptoms of hypoglycemia – in which case I’ll not do the session in question, but rather treat the hypoglycemia – I’ll only measure the blood glucose after I’ve finished the session. If the blood glucose is below 4.0 mmol/l the results will not be included in the sample. For all other observations, I will list the performance rating of the tactics session and the blood glucose level.
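The exclusion rule described above amounts to a one-line filter. The sketch below assumes each observation is stored as a (blood glucose, performance rating) pair; that data layout is my own choice for illustration.

```python
# Drop all observations with measured blood glucose below 4.0 mmol/l,
# per the sampling rule described above.
EXCLUSION_THRESHOLD = 4.0  # mmol/l

def filter_sample(observations):
    """observations: iterable of (blood_glucose_mmol_l, performance_rating) pairs."""
    return [(bg, rating) for bg, rating in observations if bg >= EXCLUSION_THRESHOLD]
```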
I intend to test the hypothesis that there is a significant and negative effect on performance of the blood glucose level measured (higher blood glucose level -> lower performance rating).
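As a sketch of what the test amounts to, the code below (my own illustration, not part of the actual analysis) estimates the slope of a one-variable OLS regression of performance rating on blood glucose; a negative estimate would point in the hypothesized direction. Proper inference would of course also require a standard error and a one-sided test.

```python
# One-variable OLS slope: performance rating regressed on blood glucose.
# A negative slope is consistent with 'higher blood glucose -> lower rating'.

def ols_slope(glucose, rating):
    n = len(glucose)
    mean_x = sum(glucose) / n
    mean_y = sum(rating) / n
    # Slope = sum of cross-deviations divided by sum of squared x-deviations.
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(glucose, rating))
    sxx = sum((x - mean_x) ** 2 for x in glucose)
    return sxy / sxx
```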
If I get around to it, it might also be interesting to see if there are threshold effects at play. One threshold to consider might be a blood glucose level of 15.0 mmol/l. The precise cut-off is semi-arbitrary, but not completely; this is close to the point where you start to be able to measure beginning ketonuria, and it’s probably also around this point that symptoms start to (maybe) appear. I write ‘maybe’ because the symptoms of high blood glucose are far less reliable than the symptoms of low blood glucose, which is also why I’m interested in the related performance effects; when I have symptoms I know I’m not ‘at my best’, but diabetics are often not ‘at their best’ without getting any signals from the body to that effect. A threshold effect also makes sense to include because it’s far from likely that a linear model will catch all the stuff that’s going on here.
As a starting point, my stopping rule will be that I’ll stop collecting data once I have 300 observations. This is completely arbitrary, but you should always have a stopping rule. I take in the neighbourhood of 8 blood tests a day, and some of them aren’t taken when I sit at my computer doing chess tactics exercises. If half of them are, however, I will have 300 observations in 2.5 months, i.e. around New Year (this is close to my exams, so I’ll surely not want to do a lot of non-work statistical modelling at that point – so it will be kept simple..). Maybe it will be worth considering doing more than one session per blood test, in which case the data can be gathered a lot faster than that, but then problems related to blood glucose exogeneity may start to pop up. I haven’t done multiple sessions one after another before, so I don’t know if such an approach will impact the performance rating; it might, and if it seems to do that I’ll probably disregard such ‘shortcuts’.
Potentially I might improve my tactics abilities during the survey period (in this specific setting that would be a bad thing, because the parameters would then no longer be constant over time) but unless such an effect is very noticeable early on I’ll proceed as if my skill does not improve during the survey period. I’ll write down the starting tactics rating (which is sort of ‘an average of recent past performances’) as well as the tactics rating at the end of the project and compare the difference between the two with the estimated standard deviation of the observations to at least get an idea if there’s a potential big problem here; I don’t know if I’ll really care if a big problem turns up, but I should at least pretend to care about this ‘risk’ of getting better over time (and as an added bonus this is also a simple way to try to establish if doing tactics exercises helps you improve your tactics abilities significantly). The reason why I assume the ‘improvement over time’-effect to be minor here is mostly that I’m actually a reasonably strong player by now so the learning curve is presumably a lot flatter than it was in the past, meaning that exercises like these should not be expected to have that big an effect on my performance.
Yes, I did consider including other variables in the model (number of unsolved problems, time spent per problem), but a) they don’t add much additional information, b) they’re strongly correlated with the rating variable (so I would not be comfortable including them in the same model as the rating variable), and c) the more data I need to write down, the more this will feel like work, and I don’t want it to feel like work. So there’ll also be no controls included; this is all just a ‘fun (not quick) and dirty’ project to have running for a while. I’ll release the (limited) data afterwards and let people play around with it if they like.
Ideas and suggestions (which do not involve me doing a lot of extra work), as well as questions, are of course most welcome.
Incidentally, if you want to know if you’re good at figuring out how smart people are based on how they look, here’s another small-scale project you may be interested in (I have nothing to do with it as such, but I know the guy behind it).
Here’s what it looked like in 1950:
Here’s the population pyramid for Western Africa, 1950:
No, I didn’t copy the same image twice. When you’re at the site and click from one version to the other you can spot the difference, but it’s not easy if you’re just comparing the images even if you look carefully. Try to compare that ‘development’ with what happened in Western Europe. First 1950:
Notice the ‘hole’ in the middle? It looks really strange. I wonder what happened 30-35 years before 1950 that might have impacted birth rates so significantly… Here’s what the pyramid looked like in 2010:
The site has more.
ii. The case for personal responsibility?
iii. Vihart has a new cute doodling in math class video up:
iv. I want to play this game at some point (while in the presence of at least one female. Otherwise it’d probably just be weird). Any ideas on how best to implement Elo-difference-related handicaps here?
v. I linked to the Vice Guide to North Korea a long time ago. By accident I came across the site again recently, and I liked this video:
vi. The short version of why I may not ‘work blog’ the paper I’m reading right now:
I may decide to blog it anyway and just talk my way around the math, I haven’t decided yet. Much of the stuff the paper covers is also covered to some extent in the paper I linked to earlier today, so that’s certainly a better place to start for people with a time constraint who are curious to know more about these things.
Incidentally, while reading the second paper a hidden assumption that had crept into my first work blog post became apparent to me. I wrote that the article I covered was “an overview article that can be read by pretty much anyone who understands English”. This is not true, and I should have known better. I measured the Gunning fog index of my own post about the article, and it came out at about 15.2 (‘the index estimates the years of formal education needed to understand the text on a first reading’). Surely the article itself has a lower fog index than my blog post about it.
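For reference, the Gunning fog index is 0.4 × (average sentence length + the percentage of words with three or more syllables). The sketch below uses a crude vowel-group heuristic for syllable counting, so it will not agree exactly with dedicated fog calculators.

```python
import re

def count_syllables(word: str) -> int:
    """Crude heuristic: count groups of consecutive vowels (minimum 1)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def gunning_fog(text: str) -> float:
    """Gunning fog index: 0.4 * (words/sentences + 100 * complex_words/words)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    complex_words = [w for w in words if count_syllables(w) >= 3]
    return 0.4 * (len(words) / len(sentences) + 100 * len(complex_words) / len(words))
```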
I know that most of you know this, but maybe it’s worth rehashing even so: I’m not a journalist, and I will generally neither think about nor care about how ‘readable’ my stuff, or the stuff I link to, is. That’s not to say I do not try hard to be very precise when it comes to terminology and choice of words and so on.
vii. This is an awesome video:
The future is now.
The new semester starts tomorrow, which means I’ll have less time for blogging than I have had over the last few months. The start of the new semester also roughly coincides with the beginning of a chess tournament I’ll be playing this autumn as well as a higher workload related to some board work I have. More work = less spare time = increased opportunity costs of blogging. I’ll likely have to cut down on my running as well; this week I ran ~42km but I don’t expect to be able to justify running more than two days a week once the semester starts, so that’s probably closer to an equilibrium of 25 km/week.
I’ve considered starting to blog stuff that’s covered in the lectures I attend, and I’d like to know if this is something people would be interested in. I think it would be a good way for me to try to make studying a more enjoyable activity – I like to blog, and if I can make studying fun this way it’s certainly worth considering. I considered doing this a while ago as well, but I abandoned the idea back then primarily because of technical obstacles; WordPress is a horrible platform to have to use when dealing with non-trivial mathematics. However, I’ll at least think about finding a way to make this work, probably by using a communication strategy emphasizing conceptual understanding rather than ‘formalism’. Such a blogging strategy is not perfect in terms of what I would ideally like to achieve, because conceptual understanding will often not get you very far during an exam; however, if we assume that the time spent blogging such things would otherwise have been spent blogging other stuff, it’s probably still a good idea.
Given that I’ll have less time for blogging, I’ve also considered starting to repost older posts on this blog. There are a few reasons why this makes sense. For one thing, people rarely look at the archives and read the old material so it tends to just fade away into oblivion. Given the kind of stuff I used to post some people would probably say that I should be happy about that, but on the other hand I’ve deleted a lot of stuff so I have ‘less to fear’ now than I used to have. To just give you an idea about how much has happened, this post is post number 1579 to be posted here on the blog, and in terms of the posts that are still around it’s post number 1000 – I’ve deleted probably more than half of the posts in the archives which were more than 3 years old (this has incidentally lost me a huge amount of google traffic, but I never cared much about that anyway). With 1000 posts to pick from going back at least 5 years, if I systematically repost one post per week – I’m not planning to, but for the sake of argument let’s assume this – it will be roughly 20 years before this post gets ‘recycled’. A lot of the stuff I haven’t deleted is still quite bad, but there’s no denying that there’s probably some potential here even so. I’m telling myself that when evaluating the alternatives it’s worth remembering that the most likely alternative to a repost is ‘no update’, not ‘an okay post about X’.
I should note here that I recently accessed the blog from a public computer without being logged in, and I noticed that there were ads displayed at the bottom of some of my posts. Placing ads on what I have come to consider ‘my site’ was not my idea, and I got very angry when I realized that WordPress was doing this. I had no idea such ads existed before then, as you don’t see them if you’re logged in. In case you were in doubt, I’d much prefer the ads not to be there – but I’m still conflicted about paying WordPress $30/year to stop them from filling my blog with ads.
i. Ironclad warship.
“An ironclad was a steam-propelled warship in the early part of the second half of the 19th century, protected by iron or steel armor plates. The ironclad was developed as a result of the vulnerability of wooden warships to explosive or incendiary shells. The first ironclad battleship, La Gloire, was launched by the French Navy in November 1859. [...]
The rapid evolution of warship design in the late 19th century transformed the ironclad from a wooden-hulled vessel that carried sails to supplement its steam engines into the steel-built, turreted battleships and cruisers familiar in the 20th century. This change was pushed forward by the development of heavier naval guns (the ironclads of the 1880s carried some of the heaviest guns ever mounted at sea), more sophisticated steam engines, and advances in metallurgy which made steel shipbuilding possible.
The rapid pace of change in the ironclad period meant that many ships were obsolete as soon as they were complete, and that naval tactics were in a state of flux. Many ironclads were built to make use of the ram or the torpedo, which a number of naval designers considered the crucial weapons of naval combat. There is no clear end to the ironclad period, but towards the end of the 1890s the term ironclad dropped out of use. New ships were increasingly constructed to a standard pattern and designated battleships or armored cruisers. [...]
From the 1860s to the 1880s many naval designers believed that the development of the ironclad meant that the ram was again the most important weapon in naval warfare. With steam power freeing ships from the wind, and armor making them invulnerable to shellfire, the ram seemed to offer the opportunity to strike a decisive blow.
The scant damage inflicted by the guns of Monitor and Virginia at Battle of Hampton Roads and the spectacular but lucky success of the Austrian flagship Ferdinand Max sinking the Italian Re d’Italia at Lissa gave strength to the ramming craze. From the early 1870s to early 1880s most British naval officers thought that guns were about to be replaced as the main naval armament by the ram. Those who noted the tiny number of ships that had actually been sunk by ramming struggled to be heard.
The revival of ramming had a significant effect on naval tactics. Since the 17th century the predominant tactic of naval warfare had been the line of battle, where a fleet formed a long line to give it the best fire from its broadside guns. This tactic was totally unsuited to ramming, and the ram threw fleet tactics into disarray. The question of how an ironclad fleet should deploy in battle to make best use of the ram was never tested in battle, and if it had been, combat might have shown that rams could only be used against ships which were already stopped dead in the water.”
This is what one of them looked like; click to view full size*:
ii. Allometry. John Hawks talked about this a bit in one of his lectures, I decided to look it up:
“Allometry is the study of the relationship of body size to shape, anatomy, physiology and finally behaviour [...] Allometry often studies shape differences in terms of ratios of the objects’ dimensions. Two objects of different size but common shape will have their dimensions in the same ratio. Take, for example, a biological object that grows as it matures. Its size changes with age but the shapes are similar. [...]
In addition to studies that focus on growth, allometry also examines shape variation among individuals of a given age (and sex), which is referred to as static allometry. Comparisons of species are used to examine interspecific or evolutionary allometry [...]
Isometric scaling occurs when changes in size (during growth or over evolutionary time) do not lead to changes in proportion. [...] Isometric scaling is governed by the square-cube law. An organism which doubles in length isometrically will find that the surface area available to it will increase fourfold, while its volume and mass will increase by a factor of eight. This can present problems for organisms. In the case of above, the animal now has eight times the biologically active tissue to support, but the surface area of its respiratory organs has only increased fourfold, creating a mismatch between scaling and physical demands. Similarly, the organism in the above example now has eight times the mass to support on its legs, but the strength of its bones and muscles is dependent upon their cross-sectional area, which has only increased fourfold. Therefore, this hypothetical organism would experience twice the bone and muscle loads of its smaller version. This mismatch can be avoided either by being “overbuilt” when small or by changing proportions during growth [...] Allometric scaling is any change that deviates from isometry. [...]
In plotting an animal’s basal metabolic rate (BMR) against the animal’s own body mass, a logarithmic straight line is obtained. Overall metabolic rate in animals is generally accepted to show negative allometry, scaling to mass to a power ≈ 0.75, known as Kleiber’s law, 1932. This means that larger-bodied species (e.g., elephants) have lower mass-specific metabolic rates and lower heart rates, as compared with smaller-bodied species (e.g., mice), this straight line is known as the “mouse to elephant curve”.
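The two scaling relationships quoted above are easy to sketch numerically. The functions and the factor-of-16 example below are my own illustration, not from the quoted articles.

```python
# Square-cube law: under isometric scaling, a linear scale factor L
# multiplies surface area by L**2 and volume/mass by L**3.
def isometric_scaling(length_factor: float) -> tuple:
    """Return (area_factor, volume_factor) for a given linear scale factor."""
    return length_factor ** 2, length_factor ** 3

# Kleiber's law: whole-body metabolic rate scales roughly as mass**0.75,
# so the mass-specific (per-kilogram) rate scales as mass**-0.25.
def metabolic_rate_ratio(mass_ratio: float, exponent: float = 0.75) -> float:
    """Ratio of metabolic rates for animals whose masses differ by mass_ratio."""
    return mass_ratio ** exponent
```

Doubling length gives `isometric_scaling(2) == (4, 8)` – the fourfold/eightfold mismatch described in the quote – while an animal 16 times heavier has roughly 8 times the total metabolic rate (16^0.75) but only half the per-kilogram rate (16^-0.25).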
“An arthropod is an invertebrate animal having an exoskeleton (external skeleton), a segmented body, and jointed appendages. Arthropods are members of the phylum Arthropoda (from Greek ἄρθρον árthron, “joint”, and ποδός podós “leg”, which together mean “jointed leg”), and include the insects, arachnids, crustaceans, and others. Arthropods are characterized by their jointed limbs and cuticles, which are mainly made of α-chitin; the cuticles of crustaceans are also biomineralized with calcium carbonate. The rigid cuticle inhibits growth, so arthropods replace it periodically by molting. The arthropod body plan consists of repeated segments, each with a pair of appendages. It is so versatile that they have been compared to Swiss Army knives, and it has enabled them to become the most species-rich members of all ecological guilds in most environments. They have over a million described species, making up more than 80% of all described living animal species, and are one of only two animal groups that are very successful in dry environments – the other being the amniotes. They range in size from microscopic plankton up to forms a few meters long.”
Another way to put it – it’s these guys:
I thought the stuff on molting (Ecdysis) was interesting:
“The exoskeleton cannot stretch and thus restricts growth. Arthropods therefore replace their exoskeletons by molting, or shedding the old exoskeleton after growing a new one that is not yet hardened. Molting cycles run nearly continuously until an arthropod reaches full size. [...] In the initial phase of molting, the animal stops feeding and its epidermis releases molting fluid, a mixture of enzymes that digests the endocuticle and thus detaches the old cuticle. This phase begins when the epidermis has secreted a new epicuticle to protect it from the enzymes, and the epidermis secretes the new exocuticle while the old cuticle is detaching. When this stage is complete, the animal makes its body swell by taking in a large quantity of water or air, and this makes the old cuticle split along predefined weaknesses where the old exocuticle was thinnest. It commonly takes several minutes for the animal to struggle out of the old cuticle. At this point the new one is wrinkled and so soft that the animal cannot support itself and finds it very difficult to move, and the new endocuticle has not yet formed. The animal continues to pump itself up to stretch the new cuticle as much as possible, then hardens the new exocuticle and eliminates the excess air or water. By the end of this phase the new endocuticle has formed. Many arthropods then eat the discarded cuticle to reclaim its materials.
Because arthropods are unprotected and nearly immobilized until the new cuticle has hardened, they are in danger both of being trapped in the old cuticle and of being attacked by predators. Molting may be responsible for 80 to 90% of all arthropod deaths.”
It’s a long article, and it has a lot of good stuff (and lots of links).
iv. Scottish independence referendum, 2014. I did not know about this.
“In game theory, coordination games are a class of games with multiple pure strategy Nash equilibria in which players choose the same or corresponding strategies. Coordination games are a formalization of the idea of a coordination problem, which is widespread in the social sciences, including economics, meaning situations in which all parties can realize mutual gains, but only by making mutually consistent decisions. [...]
A typical case for a coordination game is choosing the side of the road upon which to drive, a social standard which can save lives if it is widely adhered to. [...] In a simplified example, assume that two drivers meet on a narrow dirt road. Both have to swerve in order to avoid a head-on collision. If both execute the same swerving maneuver they will manage to pass each other, but if they choose differing maneuvers they will collide. [...] In this case there are two pure Nash equilibria: either both swerve to the left, or both swerve to the right. In this example, it doesn’t matter which side both players pick, as long as they both pick the same. Both solutions are Pareto efficient. This is not true for all coordination games”
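The quoted driving game is easy to write down as a payoff matrix and check for pure-strategy Nash equilibria by brute force. The payoff numbers below are my own illustrative choice (1 for passing safely, 0 for a collision), not from the quoted article.

```python
# The driving coordination game as a payoff matrix, plus a brute-force
# search for pure-strategy Nash equilibria.
PAYOFFS = {  # (row strategy, column strategy) -> (row payoff, column payoff)
    ("left", "left"): (1, 1),
    ("left", "right"): (0, 0),
    ("right", "left"): (0, 0),
    ("right", "right"): (1, 1),
}

def pure_nash_equilibria(payoffs):
    """Return all strategy profiles where neither player can gain by deviating."""
    strategies = {s for profile in payoffs for s in profile}
    equilibria = []
    for r, c in payoffs:
        row_best = all(payoffs[(r, c)][0] >= payoffs[(alt, c)][0] for alt in strategies)
        col_best = all(payoffs[(r, c)][1] >= payoffs[(r, alt)][1] for alt in strategies)
        if row_best and col_best:
            equilibria.append((r, c))
    return sorted(equilibria)
```

Running the search recovers exactly the two equilibria described in the quote: both swerve left, or both swerve right.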
I have not yet read all of the relevant material covering this subject in Heather, so I don’t know the extent to which he (or others) disagrees with Bury (who seems to be the main source of the article). But if you didn’t know there was such a thing as an Ostrogothic Kingdom in the first place, reading the article will probably not be a step in the wrong direction.
vii. Speleology. Yet another one of those areas of research you have probably never thought about:
“Speleology (also spelled spelæology or spelaeology) is the scientific study of caves and other karst features, their make-up, structure, physical properties, history, life forms, and the processes by which they form (speleogenesis) and change over time (speleomorphology). The term speleology is also sometimes applied to the recreational activity of exploring caves, but this is more properly known as caving, spelunking or potholing. Speleology and caving are often connected, as the physical skills required for in situ study are the same.
Speleology is a cross-disciplinary field that combines the knowledge of chemistry, biology, geology, physics, meteorology and cartography to develop portraits of caves as complex, evolving systems.”
I thought the article on troglobites (small cave-dwelling animals which live permanently underground and cannot survive outside the cave environment), which it links to, was interesting too.
* I decided to present the readers with an alternative way to post images on the blog, which I’m considering applying in the future. I have been made aware that the current modus operandi – posting pictures full-size in the posts – is not always optimal given readers’ preferences regarding browsers and the tools they use to access the site (‘modern gadgets’ vs PC). I should make it clear that if you read this blog using a PC in a Firefox browser with a pretty standard screen resolution, it looks fine. Because that’s how I access and view the site.
I am, and have been for a very long time, afraid that the blog will turn too much into a wall of text and I keep reminding myself that I should take active countermeasures to prevent this from happening. I don’t care that much about illustrations and images, but I know that many people do. Is this way of presenting images which I have applied in the post – relatively small thumbs which you can click if you want to see them in full size – (much) better than the alternative?
One more thing. I know that it’s quite possible that the reason stuff like images sometimes looks like crap is that the chosen theme for the blog is not optimal. But I also know that the last time I changed the theme, everything went to hell and it took me days to handle the problems the theme change caused. That was, mind you, at a point in time when the number of posts was less than a fourth of what it is today. If I change the theme, it affects at least every post I’ve written in the last 4 years. I have no idea how it will impact stuff like videos. So even if the theme is not optimal, changing it is not an option if I can avoid it.
It’s been a gradual process that started out last year, but I think I’m pretty much ‘there’ by now – at least I’ve come a long way. So what has happened?
Well, I’ve removed a lot of posts from the site. I’ve posted 1450 posts by now (this post is number 1450), and I’ve pulled 372 from the site altogether. I didn’t do all of that today or yesterday; this was a gradual process. Even though I’ve taken down a lot of stuff, there are still 1071 posts in the archives available for everyone to read. Most of the stuff I deleted was quite bad, and a lot of the posts were posts I wrote during the first year (the blogging learning curve isn’t all that steep). That being said, I should be clear about the fact that ‘low quality’ was but one of three choice parameters under consideration. The other two parameters of interest were ‘political content’ and ‘personal content’. The last couple of days I dealt with the last one in a systematic way, as I also noted on the twitter.
Political stuff doesn’t much interest me anymore, and I used to have strong opinions about that stuff. If I hadn’t blogged in the past and I were about to start up a blog at this point in time, I’m quite certain I’d see no major need to, say, upload tape recordings of political discussions I had with other people 4-5 years ago to the archives of the blog for everybody to listen to at their leisure. The old low-quality political posts have only been in my archives for the last couple of years because I never got around to removing them; now I have. The selection mechanism hasn’t been all that fine-grained, so I’m sure there’s plenty of bad stuff still around, and the fact that I’ve not pulled a political post should not be interpreted as ‘current me’ supporting the views expressed in the post – maybe I just never got around to removing it, maybe I overlooked it because I hadn’t categorized it properly, maybe it contained some data that alleviated the problem that the views expressed in the post were stupid, or perhaps I thought it would be weird if there was a gap of several months in the archives even though I’ve posted relatively regularly for most of the period I’ve been blogging, or…
Incidentally, I should probably take the time to note that ‘low quality’ and ‘political content’ were very much correlated post traits – by far most of the posts I’ve taken down were political posts. Politics is the Mind-Killer, and just because you think of yourself as an independent and reasonable person doesn’t mean that you don’t commit a lot of the same mistakes that all those other unreasonable people make all the time, in part because just like everyone else, you have a strong need to validate and justify the political views you subscribe to. See also this.
As for the last parameter, the personal stuff, there’s no arguing that I’ve written a lot of stuff here over time that I’d not want some random guy on the street to know about me. Maybe not a lot of posts, but if you include parameters like ‘post length’ and ‘size of comment section’ (comment sections not infrequently remained active for perhaps a week after a post was written) in the analysis, it actually turned out to be quite a bit of material. Much of the stuff I’ve taken down was the kind of stuff you’d not want somebody you don’t know very well but might want to get to know better in the future, like a potential future close friend or girlfriend, to have access to all at once right from the get-go – to have that person read stuff like that could easily end up colouring that person’s perception of me, perhaps irrevocably, causing him or her to get the wrong idea and think that I’m someone I’m actually not. “You have to dole out your crazy in little pieces, you can’t do it all at once.”
I’ve had this problem with the blog for some time now; there’d be this person or that (in Real Life) whom I’d like to tell about it, but I’ve always felt that given what was currently there to be found in the archives, I really would not feel comfortable telling them about it. Now I’ve changed the equation by removing some of the most personal stuff here. As I also tweeted(?) earlier, if you’ve left a comment that you really liked, or you’d like to review a discussion we had here that is no longer available, give me a heads-up and I’ll mail you (/or something like that). Again – I’ve deleted nothing, all of it is still ‘in here’. The obvious alternative to this solution model was a two-tier posting system, where some posts would be password protected and others (most) would be available for all to read. I didn’t like that model, but maybe I’ll change my mind about that later on.
Given that people like to comment on the personal posts, the fact that I’ve taken down quite a few of those also means that the number of comments has probably dropped significantly, and that the blog looks less active than it used to. A week ago the blog had approximately 2000 comments in the archives – now that number has been reduced significantly. That’s a shame, but I hope you guys will still comment here in the future despite the fact that some of the stuff you’ve written in the past has now been taken down. Anyway – comment sections are discussion fora, not history books.
One last change I’ve made is to drastically reduce the number of categories. There are currently 342 categories in the sidebar to your right, which is arguably still way too many, but when I started this process there were more than 700. I hope this will make the blog a little easier to navigate.
First of all, I’ve made a decision to try not to post too often over the next 2 months. Blogging takes time, doing stuff that’s blog-worthy takes time. I have a couple of important exams coming up in January. I should not be spending any of my time on the stuff that I blog about here before those exams are behind me.
Next, a few links.
““Scientists discover gene for autism” (or ovarian cancer, or depression, cocaine addiction, obesity, happiness, height, schizophrenia… and whatever you’re having yourself). These are typical newspaper headlines (all from the last year) and all use the popular shorthand of “a gene for” something. In my view, this phrase is both lazy and deeply misleading and has caused widespread confusion about what genes are and do and about their influences on human traits and disease.” [...]
“While geneticists may know what they mean by the shorthand of “genes for” various traits, it is too easily taken in different, unintended ways. In particular, if there are genes “for” something, then many people infer that the something in question is also “for” something. For example, if there are “genes for homosexuality”, the inference is that homosexuality must somehow have been selected for, either currently or under some ancestral conditions. Even sophisticated thinkers like Richard Dawkins fall foul of this confusion – the apparent need to explain why a condition like homosexual orientation persists. Similar arguments are often advanced for depression or schizophrenia or autism – that maybe in ancestral environments, these conditions conferred some kind of selective advantage. That is one supposed explanation for why “genes for schizophrenia or autism” persist in the population.
Natural selection is a powerful force, but that does not mean every genetic variation we see in humans was selected for, nor does it mean every condition affecting human psychology confers some selective advantage. In fact, mutations like those in the neuroligin genes are rapidly selected against in the population, due to the much lower average number of offspring of people carrying them. The problem is that new ones keep arising – in those genes and in thousands of others required to build the brain. By analogy, it is not beneficial for my car to break down – this fact does not require some teleological explanation. Breaking down occasionally in various ways is not a design feature – it is just that highly complex systems bring an associated higher risk due to possible failure of so many components.
So, just because the conditions persist at some level does not mean that the individual variants causing them do. Most of the mutations causing disease are probably very recent and will be rapidly selected against – they are not “for” anything.”
I have made a similar point in the past, probably more than once.
iii. Stuff you didn’t know about mine fires.
“Whether started by humans or by natural causes, coal seam fires continue to burn for decades or even centuries until either the fuel source is exhausted; a permanent groundwater table is encountered; the depth of the burn becomes greater than the ground’s capacity to subside and vent; or humans intervene. Because they burn underground, coal seam fires are extremely difficult and costly to extinguish, and are unlikely to be suppressed by rainfall. There are strong similarities between coal fires and peat fires. [...] Many recent mine fires have started from people burning trash in a landfill that was in proximity to abandoned coal mines, including the much publicized Centralia, Pennsylvania, fire, which has been burning since 1962. Of the hundreds of mine fires in the United States burning today, most are found in the state of Pennsylvania. [...] It is estimated that Australia’s Burning Mountain, the oldest known coal fire, has burned for 6,000 years.”
In case you were in doubt, “Extinguishing underground coal fires, which sometimes exceed temperatures of 540°C (1,000°F), is both highly dangerous and very expensive.”
This is probably as good a place as any to once again remind old readers, and to let new ones in on this fact, that this blog is not one of those blogs that’ll just ‘die’ without an explanation. If I decide to close the blog down, I’ll tell you. If I haven’t told you anything and I also don’t update either the blog or my twitter in weeks, the most likely explanation is that I’m dead or something along those lines.
Just thought I’d let you know so you don’t miss that one. If you like or dislike a post, you no longer need to leave a comment to tell me that (though I do like comments, so don’t hold back on that account..). At the bottom of each post there is now a “Rate This” function where you can give a post 1-5 stars, depending on how much you liked it. Please consider using this feature; it takes less work than writing a comment, and the feedback is much appreciated. And please consider using the whole scale, rather than just the four- or five-star option – a one-star evaluation is valuable information for me too.
Incidentally, you can’t see the rating option from the main site (econstudentlog.wordpress.com), so you’ll have to click the specific post you want to evaluate in order to do so.
I’ll go home to my parents tomorrow morning and if I don’t react to comments or post new stuff here in the next couple of days, that’s the reason.
There’ll be no updates on my part here for the next days, neither comments nor new posts. Perhaps I’ll post again Friday or Saturday.
Last exam this semester is getting close and I’m really busy (or at least I ought to be. Either way…). A few wikipedia links (no descriptions, would rather keep this brief):
Here’s the link, below a few quotes to illustrate what kind of book this is:
1) “Theorem 3: Let A, B, and C be finite sets. Then |A∪B∪C| = |A|+|B|+|C|-|A∩B|-|B∩C|-|A∩C|+|A∩B∩C|.”
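Theorem 3 (inclusion–exclusion) is easy to sanity-check with Python’s built-in sets; the three sets below are an arbitrary example of my own:

```python
# Inclusion-exclusion for three finite sets:
# |A∪B∪C| = |A| + |B| + |C| - |A∩B| - |B∩C| - |A∩C| + |A∩B∩C|
A, B, C = {1, 2, 3, 4}, {3, 4, 5}, {4, 5, 6, 7}

lhs = len(A | B | C)                       # size of the union
rhs = (len(A) + len(B) + len(C)
       - len(A & B) - len(B & C) - len(A & C)
       + len(A & B & C))
print(lhs, rhs)  # 7 7
```

The subtracted terms correct for elements counted twice, and the final term adds back elements (here just 4) that were subtracted once too often.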
2) “Existential quantification may be applied to several variables in a predicate and the order in which the quantifications are considered does not affect the truth value. For a predicate with several variables we may apply both universal and existential quantification. In this case the order does matter.”
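The point about quantifier order can be illustrated over a small finite domain; the domain and the predicate x ≠ y below are a toy example of my own, not from the book:

```python
# Over the domain {0, 1, 2, 3}, evaluate the predicate P(x, y): x != y
# under the two quantifier orders.
domain = range(4)

# ∀x ∃y: x != y  -- True: every x has some other element to differ from.
forall_exists = all(any(x != y for y in domain) for x in domain)

# ∃y ∀x: x != y  -- False: no single y differs from every x, since it cannot differ from itself.
exists_forall = any(all(x != y for x in domain) for y in domain)

print(forall_exists, exists_forall)  # True False
```

Swapping the quantifiers changes the truth value, exactly as the quote says.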
3) “Theorem 2 The Extended Pigeonhole Principle: If n pigeons are assigned to m pigeonholes, then one of the pigeonholes must contain at least ⌊(n-1)/m⌋ + 1 pigeons.
Proof (by contradiction): If each pigeonhole contains no more than ⌊(n-1)/m⌋ pigeons, then there are at most m * ⌊(n-1)/m⌋ ≤ n-1 pigeons in all. This contradicts our hypothesis, so one of the pigeonholes must contain at least ⌊(n-1)/m⌋ + 1 pigeons.”
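The extended pigeonhole bound can be brute-force checked for small n and m – a quick sketch of my own, not from the book:

```python
from itertools import product
from collections import Counter

def bound_holds(n, m):
    # The claimed bound: some hole receives at least floor((n-1)/m) + 1 pigeons.
    bound = (n - 1) // m + 1
    # Enumerate every possible assignment of n pigeons to m holes and
    # verify that the fullest hole always meets the bound.
    return all(max(Counter(assignment).values()) >= bound
               for assignment in product(range(m), repeat=n))

print(all(bound_holds(n, m) for m in range(1, 4) for n in range(1, 7)))  # True
```

Exhaustive checking is obviously only feasible for tiny n and m, but it confirms the theorem on every case it covers.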
The above perhaps points to part of the reason why I haven’t quoted from the book before. Given that the exams are getting closer every day, it’s unlikely that I’ll do much more reading in this book (or perhaps in any non-directly study-related book) over the next month. Chapter 2 contained a few remarks on how to construct proofs; though most of the ideas were familiar to me, they are not completely exam-irrelevant. Pretty sure most of the other stuff is. Though I’ll perhaps not get a lot of non-exam-relevant reading done, I’ll try to keep blogging over the coming weeks. I’ve almost returned to ‘one post/day’, and I like that very much, though it’s uncertain whether I can keep up that kind of activity level in the longer run.
Below are some domain names of some of the visitors coming by this blog since Thursday(? yes, I think it was Thursday I started looking at this, though maybe it was Friday), provided by Sitemeter:
http://www.wvu.edu/ (West Virginia University)
http://nih.gov/ (National Institutes of Health)
http://www.justice.gov/ (The United States Department of Justice)
aau.dk (Aalborg University)
http://www.ucn.dk/ (University College Nordjylland)
http://www.cornell.edu/ (Cornell University)
http://ifl.net/ (“RM Internet For Learning is the specialist Internet Service provider (ISP) to UK education establishments.”)
au.dk (University of Aarhus)
hhknet.dk (Danish Network for Research and Education)
http://www.uni-hamburg.de/ (Universitaet Hamburg campus net)
http://wiu.edu/ (Western Illinois University – if I were to consider ever applying to an American university, that front page would have me running for the hills. Don’t really know why.)
Most of the people who’ve visited the site via those domains are one-time visitors who do not return; only a few of these domains have produced more than one visit. They don’t make up a lot of the total traffic (close to(?) none during the weekend, when people are far more likely to be using their private internet connections), but they still make up quite a bit more of the total traffic than I’d assumed. Just like other one-time visitors, though, most of these visitors don’t give the site 10 seconds before moving on.
Three spam comments from yesterday’s selection (links removed):
1. I do not think I have seen this described in such an informative way before. You actually have made this so much clearer for me. Thanks!
2. Hey, I found your blog in a new directory of blogs. I dont know how your blog came up, must have been a typo, anyway cool blog, I bookmarked you.
3. Your site was extremely interesting, especially since I was searching for more info on this just sa few days ago.
Of course there are multiple strategies for constructing the optimal spam comment. This spammer (it’s the same program doing the ‘commenting’ in all three cases) obviously considers it (more) likely that bloggers will approve comments like these, increasing the odds of someone clicking the links, because people will often give someone who compliments them the benefit of the doubt. I’ll take the fact that I’ve seen spam comments such as these many times before as a sign that they work. People like to be told nice things about themselves. It doesn’t really matter all that much who – or for that matter what – the messenger is.
I have moved, and I don’t have an internet connection at my new place yet. So I probably won’t update this blog for a while.
Whiteberg already linked to the aptly named failblog, but it’s a really funny site, and in case you don’t read his blog or didn’t follow the link, here’s a second chance to get to know it.
Here’s one example (and let me tell you: it is very difficult to pick only one):
- 180 grader
- alfred brendel
- Arthur Conan Doyle
- Bent Jensen
- Bill Bryson
- Bill Watterson
- Claude Berri
- current affairs
- Dan Simmons
- David Copperfield
- david lynch
- den kolde krig
- Dinu Lipatti
- Douglas Adams
- economic history
- Edward Grieg
- Eliezer Yudkowsky
- Ezra Levant
- Filippo Pacini
- financial regulation
- Flemming Rose
- foreign aid
- Franz Kafka
- freedom of speech
- Friedrich von Flotow
- Fyodor Dostoevsky
- Game theory
- Garry Kasparov
- George Carlin
- george enescu
- global warming
- Grahame Clark
- harry potter
- health care
- isaac asimov
- Jane Austen
- John Stuart Mill
- Jon Stewart
- Joseph Heller
- karl popper
- Khan Academy
- knowledge sharing
- Leland Yeager
- Marcel Pagnol
- Maria João Pires
- Mark Twain
- Martin Amis
- Martin Paldam
- mikhail gorbatjov
- Mikkel Plum
- Morten Uhrskov Jensen
- Muzio Clementi
- Nikolai Medtner
- North Korea
- nuclear proliferation
- nuclear weapons
- Ole Vagn Christensen
- Oscar Wilde
- Pascal's Wager
- Paul Graham
- people are strange
- public choice
- rambling nonsense
- random stuff
- Richard Dawkins
- Rowan Atkinson
- Saudi Arabia
- science fiction
- Sun Tzu
- Terry Pratchett
- The Art of War
- Thomas Hobbes
- Thomas More
- walter gieseking
- William Easterly