# Econstudentlog

## Workblog

It takes way more time to cover this stuff in detail here than I’m willing to spend on it, but here are a few relevant links to stuff I’m working on/with at the moment:

iv. Chow test.

vi. Education and health: Evaluating Theories and Evidence, by Cutler & Lleras-Muney.

vii. Education, Health and Mortality: Evidence from a Social Experiment, by Meghir, Palme & Simeonova.
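
Since the Chow test is on the list: here's a minimal sketch of how the statistic can be computed by hand on made-up data (in practice you'd probably use a statistics package, but the mechanics are simple):

```python
import numpy as np

def chow_test(X1, y1, X2, y2):
    """Chow test F-statistic for equal coefficients across two subsamples.

    Compares the pooled residual sum of squares with the sum of the
    subsample RSS; k is the number of regressors (incl. the intercept).
    """
    def rss(X, y):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return resid @ resid

    k = X1.shape[1]
    n1, n2 = len(y1), len(y2)
    rss_pooled = rss(np.vstack([X1, X2]), np.concatenate([y1, y2]))
    rss_split = rss(X1, y1) + rss(X2, y2)
    return ((rss_pooled - rss_split) / k) / (rss_split / (n1 + n2 - 2 * k))

# Made-up data with NO structural break, so the statistic should be small.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=200)
F = chow_test(X[:100], y[:100], X[100:], y[100:])
print(F)
```

Under the null of stable coefficients the statistic is F-distributed with (k, n1+n2−2k) degrees of freedom, so here, where both halves share the same data-generating process, it should be unremarkable.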

April 30, 2013 Posted by | econometrics, economics, papers, personal | Leave a Comment

## What I’m currently working on…

I haven’t really work-blogged anything substantial this semester so far and I’ve felt a bit guilty about that. Today on my way home from lectures I decided that one thing I could do, which wouldn’t take a lot of work on my part, was to just upload my notes taken during a lecture.

The stuff uploaded below is one and a half hours (2 lectures, each lasting 45 minutes) of my life, roughly. It wasn’t the complete lecture, as the lecturer also briefly went through an example of how to do the specific maximum likelihood estimation and how to perform the Smith-Blundell procedure on a data set in Stata, a statistical program. On the other hand it’s more than 2 hours of my life, because I also had to prepare for the lecture…

I know that people who aren’t very familiar with mathematical models tend to assume that ‘the level of complexity’ of a mathematical expression is positively correlated with (‘and thus causally linked to…’) the ‘amount of algebra’ (‘long equations with lots of terms are more complicated and involve more advanced math than short equations with few terms’). In general that’s not how it works. The stuff covered during the lecture was corner solution response models with neglected heterogeneity and endogenous variables; it may look simple because there’s a lot of ‘a+b type stuff’, but you need to think hard to get things right, and even simple-looking steps may cause problems when you’re preparing for exams in a course like this. Non-linear models with unobserved variables aren’t what you start out with when you learn statistics, but on the other hand this was hardly the most technical lecture I’ve had, so I figured it sort of made sense to upload it; I added quite a few comments to the equations written on the blackboard, which should make the material easier to follow.
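
For readers curious what the simplest corner solution response model looks like without the complications from the lecture (no neglected heterogeneity, no endogenous regressors), here's a sketch of maximum likelihood estimation of a standard type I Tobit on simulated data; all numbers are made up:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Simulate corner-solution data: y* = X @ beta + sigma*e, y = max(0, y*)
rng = np.random.default_rng(1)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true, sigma_true = np.array([0.5, 1.0]), 1.0
y = np.maximum(X @ beta_true + sigma_true * rng.normal(size=n), 0.0)

def neg_loglik(params):
    """Negative Tobit log-likelihood; sigma parameterized in logs to stay positive."""
    beta, sigma = params[:-1], np.exp(params[-1])
    xb = X @ beta
    # censored observations contribute P(y* <= 0) = Phi(-xb/sigma),
    # uncensored ones the usual normal density of y
    ll = np.where(y == 0,
                  norm.logcdf(-xb / sigma),
                  norm.logpdf(y, loc=xb, scale=sigma))
    return -ll.sum()

res = minimize(neg_loglik, x0=np.zeros(3), method="Nelder-Mead")
beta_hat, sigma_hat = res.x[:2], np.exp(res.x[2])
print(beta_hat, sigma_hat)  # should be close to (0.5, 1.0) and 1.0
```

The censored observations contribute a probability mass at the corner, the uncensored ones a density; mixing the two correctly in the likelihood is exactly the kind of simple-looking step that's easy to get wrong.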

Anyway I figured at least one or two of you might find it interesting to ‘have a look inside the classroom’ (you can click the images to view them in a higher resolution):

April 5, 2013

## Back to work…

I’ve not had lectures for the last two weeks, but tomorrow the new semester starts.

Like last semester I’ll try to ‘work-blog’ some stuff along the way – hopefully I’ll do it more often than I did, but it’s hard to say if that’s realistic at this point.

I bought the only book I’m required to acquire this semester earlier today:

…and having had a brief look at it I’m already starting to wonder if it was even a good idea to take that course. I’ve been told it’s a very useful course, but I have a nagging suspicion that it may also be quite hard. Here are some of the reasons (click to view in a higher resolution):

I don’t think it’s particularly likely that I’ll cover stuff from that particular course in work-blogs, for perhaps obvious reasons. One problem is the math; WordPress doesn’t handle math very well. Another problem is that most readers would be unlikely to benefit much from such posts unless I spent a lot more time on them than I’d like to. But it’s not my only course this semester. We’ll see how it goes.

February 4, 2013 Posted by | econometrics, economics, statistics | Leave a Comment

## Making Choices in Health: WHO Guide to Cost-Effectiveness Analysis

You can buy the book here, though I should note that I’m certain that free versions of the book are also available online. I started reading it yesterday and I completed it today.

The book consists of two parts: part one deals with “Methods for Generalized Cost-Effectiveness Analysis” and part two consists of “Background Papers and Applications”. If you’re weird like me (or if you’re a researcher in the field…) you’ll want to read both parts. They write in the introduction that: “The main objective of this Guide is to provide policy-makers and researchers with a clear understanding of the concepts and benefits of GCEA [generalized cost-effectiveness analysis]. It provides guidance on how to undertake studies using this form of analysis and how to interpret the results.” It’s very clear that part one is written mainly for the politicians and that part two is written for the researchers – and good luck finding a politician who’ll actually read part two (/or part one..?). I like to think that part one can be read and understood by most people, certainly including most readers of this blog, and I do not believe it requires much knowledge of statistics or mathematics; some papers in part two, on the other hand, require math beyond the level I’ve taken for the reader to understand all the steps (here are a few wikipedia articles I had a look at while reading this part of the book). They repeat themselves a bit here and there, but it’s not hard to skim passages covering stuff you’ve already dealt with elsewhere.

It should be noted that although some of it is a bit technical, there’s some good stuff in part 2 as well – for instance I really liked this table (from the fourth study in part 2, Econometric estimation of country-specific hospital costs):

Click to view full size. The obvious conclusion to draw here is that costs do not vary much across countries – no, they definitely do not… Actually I was very surprised to learn that there’s a huge amount of variation even within countries – in the same article they note that: “it must be emphasized that there is wide variation in the unit costs estimated from studies within a particular country [...] These differences are sometimes of an order of magnitude, and cannot always be attributed to different methods. This implies that analysts cannot simply take the cost estimates from a single study in a country to guide their assessment of the cost-effectiveness of interventions, or the costs of scaling-up. In some cases, they could be wrong by an order of magnitude.”

In the first chapter they state that:

“It appears that the field can develop in two distinct directions, towards increasingly contextualized analyses or towards more generalized assessments. Cost-effectiveness studies and the sectoral application of CEA [cost effectiveness analyses] to a wide range of interventions can become increasingly context specific—at the individual study level by directly incorporating other social concerns such as distributional weights or a priority to treat the sick and at the sectoral level by developing complex resource allocation models that capture the full range of resource, ethical and political constraints facing decision-makers.
We fear that this direction will lead ultimately to less use of cost-effectiveness information in the health policy dialogue. Highly contextualized analyses must by definition be undertaken in each context; the cost and time involved as well as the inevitable complexity of the resource allocation models will limit their practical use. The other direction for sectoral cost-effectiveness, the direction that WHO is promoting [...] is to focus on the general assessment of the costs and health benefits of different interventions in the absence of various highly variable local decision constraints. A generalized league table of the cost-effectiveness of interventions for a group of populations with comparable health systems and epidemiological profiles can make the most powerful component of CEA readily available to inform health policy debates. Relative judgements on cost-effectiveness—e.g. treating tuberculosis with the DOTS strategy is highly cost-effective and providing liver transplants in cases of alcoholic cirrhosis is highly cost-ineffective—can have wide ranging influence and, as one input to an informed policy debate, can enhance allocative efficiency of many health systems.”

I’m not a health economist, so I have no idea which way the field has developed since the book was written. The book isn’t exactly brand new (it’s from 2003), so I figured one way to probe whether the recommendations have been followed in the years since publication was to try to figure out the extent to which one of the big ideas here, the use of Stochastic League Tables in CEAs, has been implemented. So I went to google scholar and searched for the term – and it gave me 7400+ results (and 589 since 2012). It seems to me that the use of these things has at least caught on. I incidentally have no idea to what extent researchers have now moved towards the use of GCEAs and away from the previously (?) widely used ‘incremental approach’ studies when performing these analyses. I posted the long quote above also to caution people unfamiliar with the literature against complaining about CEAs which are ‘not specific enough’ (a complaint I’ve made myself in the past…) – it may make a lot of sense not to make a CEA too specific, in order to make it potentially more useful to decisionmakers. A related point is that the idea of using CEAs in a formulaic way to decide which health interventions ‘pass the bar’ and which do not – and thus basing decisions such as which health interventions should receive government support solely on the outcome of CEAs – does not have much support in the field. As they put it in Murray, Lauer et al. (study 7 in the second part):
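
To give a rough idea of what a stochastic league table involves (this is a toy illustration with made-up numbers, not WHO's actual procedure): uncertainty about costs and effects is propagated by Monte Carlo simulation, so that interventions are ranked probabilistically rather than by point estimates:

```python
import numpy as np

rng = np.random.default_rng(42)
draws = 10_000

# Three hypothetical interventions with uncertain costs (per person) and
# health effects (say, DALYs averted per person); (mean, sd) pairs, all made up.
interventions = {
    "A": ((100.0, 20.0), (2.0, 0.5)),
    "B": ((300.0, 60.0), (5.0, 1.5)),
    "C": ((250.0, 50.0), (2.5, 1.0)),
}

names = list(interventions)
ratios = np.empty((draws, len(names)))
for j, name in enumerate(names):
    (c_mu, c_sd), (e_mu, e_sd) = interventions[name]
    cost = rng.normal(c_mu, c_sd, draws)
    effect = np.clip(rng.normal(e_mu, e_sd, draws), 0.01, None)  # keep effects positive
    ratios[:, j] = cost / effect   # cost per DALY averted in this draw

# Share of draws in which each intervention has the lowest ratio --
# the kind of probabilistic ranking a stochastic league table reports.
best = ratios.argmin(axis=1)
for j, name in enumerate(names):
    print(name, (best == j).mean())
```

The point of reporting these shares instead of a single ranked list is precisely that the underlying estimates are uncertain; a deterministic league table hides how often the ordering would flip.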

“The results of cost-effectiveness analysis should not be used in a formulaic way—starting with the intervention that has the lowest cost-effectiveness ratio, choosing the next most attractive intervention, and continuing until all resources have been used (10). There is generally too much uncertainty surrounding estimates for this approach; moreover, there are other goals of health policy in addition to improving population health. The tool is most powerful when it is used to classify interventions into broad categories such as those we used. This approach provides decision-makers with information on which interventions are low-cost ways of improving population health and which improve health at a much higher cost. This information enters the policy debate to be weighed against the effect of the interventions on other goals of health policy.”

(They also emphasize this aspect in the first part of the book). I could quote a lot of stuff from the book, but if you’re interested you’ll read it and if you’re not you’d probably not read my quotes either. If you’re interested in cost-effectiveness analyses, I think you should probably read this book – or at least the first part which is relatively easy and does not take that long to read. If you’re not interested in this stuff you should definitely stay away from it. But I think the book is a good starting point if you seek to understand some of the main concepts, issues, and tradeoffs involved when doing and interpreting CEAs.

One last thing I should note, primarily to the people who will not read the book: Many people think of the people doing stuff like cost-effectiveness analyses in this field as the bad guys. That’s because they’re the ones who keep reminding us that we can’t afford everything. When it comes to health care we don’t like to be reminded of this fact, because sometimes when it’s been decided by decisionmakers that public money should not be spent on X it means that someone will die. What I’d like to remind you of is that resource constraints don’t go away just because people prefer to ignore them; rather, when people disregard cost-effectiveness it may just mean that fewer people will be helped and more people will die than if a different course of action, perhaps the one suggested by a CEA, had been taken. CEAs may not provide the complete answer to how we should do these things and they have some limitations, but we should all keep in mind that it matters how we spend our money on this stuff, and that completely ignoring the resource constraint isn’t really a solution to the problems we face when dealing with these matters.

January 30, 2013 Posted by | books, economics, health, health care | Leave a Comment

## “So you see, it’s really quite simple…”

“…it’s just a matter of estimating the hazard functions…”

Or something like that. The instructor actually said the words in the post title, but I believe his voice sort of trailed off as he finished the sentence. All the stuff above is from today’s lecture notes; click to enlarge. The quote is from the last part of the lecture, after he’d gone through that stuff.

In the last slide, it should “of course” be ‘Oaxaca-Blinder decomposition’, rather than ‘Oaxaca-Bilder’.
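
Incidentally, the decomposition itself is simple enough to sketch in a few lines of code; this is the two-fold version on simulated data, where I've deliberately given both groups the same coefficients, so the gap should be almost entirely 'explained':

```python
import numpy as np

def oaxaca_blinder(X_a, y_a, X_b, y_b):
    """Two-fold Oaxaca-Blinder decomposition of the mean outcome gap.

    The gap mean(y_a) - mean(y_b) is split into an 'explained' part
    (differences in average characteristics, valued at group b's
    coefficients) and an 'unexplained' part (differences in coefficients).
    """
    beta_a, *_ = np.linalg.lstsq(X_a, y_a, rcond=None)
    beta_b, *_ = np.linalg.lstsq(X_b, y_b, rcond=None)
    xbar_a, xbar_b = X_a.mean(axis=0), X_b.mean(axis=0)
    explained = (xbar_a - xbar_b) @ beta_b
    unexplained = xbar_a @ (beta_a - beta_b)
    return explained, unexplained

# Toy data: identical coefficients in both groups, different average
# characteristics -> the gap should be attributed almost entirely to 'explained'.
rng = np.random.default_rng(3)
n = 2000
X_a = np.column_stack([np.ones(n), rng.normal(1.0, 1.0, n)])
X_b = np.column_stack([np.ones(n), rng.normal(0.0, 1.0, n)])
beta = np.array([1.0, 0.5])
y_a = X_a @ beta + 0.1 * rng.normal(size=n)
y_b = X_b @ beta + 0.1 * rng.normal(size=n)
explained, unexplained = oaxaca_blinder(X_a, y_a, X_b, y_b)
print(explained, unexplained)
```

Because both regressions include an intercept, the two components sum exactly to the raw mean gap, which is a handy sanity check.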

December 11, 2012 Posted by | economics, econometrics | Leave a Comment

## Economics as a soft science

What we’re covering right now in class is not something I’ll cover here in detail – it’s very technical stuff. A few excerpts from today’s lecture notes (click to view full size):

Stuff like this is why I actually get a bit annoyed by people who state that their impression is that economics is a relatively ‘soft’ science, and who ask questions like ‘the math you guys make use of isn’t all that hard, is it?’ (I’ve been asked this question a few times in the past.) It’s actually true that a lot of it isn’t – we spend a lot of time calculating derivatives, finding the signs of those derivatives and similar stuff. And economics is a reasonably heterogeneous field, so there’s a lot of variation – for example, in Denmark business graduates often call themselves economists too, even though a business graduate’s background, in terms of what’s been learned during the education, would most often be rather different from e.g. my own.

What I’ll just say here is that the statistics stuff generally is not easy (if you think it is, you’ve spent way too little time on that stuff*). And yeah, the above excerpt is from what I consider my ‘easy course’ this semester – most of it is not like that, but some of it sure is.

Incidentally I should comment in advance here, before people start talking about physics envy (mostly related to macro, IMO (and remember again the field heterogeneity; many, perhaps a majority of, economists don’t specialize in that stuff and don’t really know all that much about it…)), that the complexity economists deal with when they work with statistics – which is also economics – is the same kind of complexity that’s dealt with in all other subject areas where people need to analyze data to reach conclusions about what the data can tell us. Much of the complexity is in the data; it relates to the fact that the real world is complex, and if we want to model it right and get results that make sense, we need to think very hard about which tools to use and how to use them. The economists who decide to work with that kind of stuff, more than they absolutely have to in order to get their degrees that is, are economists who are taught how to analyze data the right way – and how the right way may depend on what kind of data you’re working with and the questions you want to answer. This also involves learning what an Epanechnikov kernel is, and what it implies that the error terms of a model are m-dependent.
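
To illustrate the first of those two examples: the Epanechnikov kernel is just a particular (compactly supported) weighting function, and a minimal kernel density estimator using it might look as follows (the data and bandwidth below are arbitrary):

```python
import numpy as np

def epanechnikov_kde(x_grid, data, h):
    """Kernel density estimate using the Epanechnikov kernel
    K(u) = 0.75*(1 - u**2) on |u| <= 1 (and 0 elsewhere)."""
    u = (x_grid[:, None] - data[None, :]) / h
    K = np.where(np.abs(u) <= 1, 0.75 * (1 - u ** 2), 0.0)
    return K.sum(axis=1) / (len(data) * h)

rng = np.random.default_rng(7)
data = rng.normal(size=1000)          # arbitrary sample
grid = np.linspace(-4, 4, 201)
density = epanechnikov_kde(grid, data, h=0.5)
# sanity check: the estimated density should integrate to roughly one
print(density.sum() * (grid[1] - grid[0]))
```

The hard statistical question is of course not coding the kernel but choosing the bandwidth h, which is where the real theory lives.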

(*…or (Plamus?) way too much time…)

October 30, 2012 Posted by | econometrics, economics | 2 Comments

## Work Blogging 3

I’ve gotten behind on this stuff, but I hope to post a few posts this week – we’ll see.

In my last post on this subject I said that the next paper I’d be covering was Pissarides’ Short-Run Equilibrium Dynamics of Unemployment, Vacancies, and Real Wages, but it turns out that I got the course reading order mixed up and that the Pissarides paper actually came before the Andersen & Svarer paper I covered in my second post. Basically the Pissarides paper was used to introduce us to the DMP [Diamond-Mortensen-Pissarides] model framework, whereas Andersen & Svarer was meant to show how that model setup is applied in research today. It would have been very hard to read and understand the A&S paper without implicitly also ending up with a pretty good idea of what is going on in Pissarides, which means that there isn’t much point in covering this paper here. Though there are a few technical differences between the models applied, it’s the same model framework the papers make use of. Pissarides is also a rather short paper, so there isn’t that much new stuff to talk about which I haven’t already touched upon to some extent when covering A&S.

Maybe a few general aspects should however be touched upon briefly before I move on, if only so that I can remember this stuff later. One thing to note is the accumulation of rents associated with the labour market friction in these models; the free entry assumption means that the expected value to a firm of creating a vacancy is driven to zero in equilibrium, but the value of a filled job is greater than zero (for reasonable parameter values). Another thing to note is that whereas search costs are introduced into the labour markets in these models (realism: +1 compared to the alternatives), they still make use of some key simplifying assumptions (realism: -(?)) – assumptions which may be driving some of the results of the models. Some standard structural assumptions used in these models are: i. additively separable utility functions and ii. Cobb–Douglas matching functions – we need these assumptions to solve the models, but they might be problematic. Perhaps it’s also worth noting here that we tend to think of the labour market as uncoordinated in these models; basically firms and jobs are pretty much the same thing. So free entry of firms means that a new job vacancy will be opened if the expected value of that vacancy is positive, but we don’t care whether that vacancy is created by a firm with 500 employees or one with two. In the real world, factors like labour market centralization and unionization impact the search costs and matching dynamics of both workers and firms.
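
To make the free-entry condition a bit more concrete, here's a sketch of how equilibrium labour market tightness could be computed in a bare-bones DMP-type model with a Cobb-Douglas matching function; the functional forms and all parameter values are illustrative only, not taken from any of the papers:

```python
from scipy.optimize import brentq

# Illustrative parameter values only -- not from any of the papers discussed.
p, z = 1.0, 0.4        # match output (productivity) and value of non-work
beta = 0.5             # worker bargaining power
c = 0.3                # flow cost of keeping a vacancy open
r, phi = 0.05, 0.1     # discount rate and job separation rate
A, alpha = 1.0, 0.5    # Cobb-Douglas matching: q(theta) = A * theta**(-alpha)

def free_entry(theta):
    """Free entry drives the expected value of a vacancy to zero:
    c / q(theta) = [(1 - beta)*(p - z) - beta*c*theta] / (r + phi),
    where the right-hand side uses the Nash-bargained wage
    w = z + beta*(p - z + c*theta)."""
    q = A * theta ** (-alpha)
    return c / q - ((1 - beta) * (p - z) - beta * c * theta) / (r + phi)

theta_star = brentq(free_entry, 1e-6, 10.0)   # equilibrium tightness
job_finding = A * theta_star ** (1 - alpha)   # theta * q(theta)
u_star = phi / (phi + job_finding)            # steady-state unemployment rate
print(theta_star, u_star)
```

The free-entry condition pins down θ, and steady-state unemployment then follows from flow balance between job separation and job finding, which is the Beveridge-curve part of the model.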

So anyway, I’ve decided below to cover Boone, Fredriksson, Holmlund and van Ours’ Optimal unemployment insurance with monitoring and sanctions. That paper also applies a DMP model framework. It briefly covers/contrasts its results with Becker’s 1968 paper on crime, which is sort of the starting point for this literature. The very short version is that Becker argues in his paper that by raising the sanction in a model with risk-neutral agents, monitoring costs can be reduced without affecting the incentives for crime. So when monitoring is costly and punishment is free (which it arguably is in the case of fines, which impose no cost as such on society because they’re just a transfer payment from one group to another), the optimal level of monitoring will go toward zero and the optimal punishment will increase rapidly. However, the Boone et al. paper points out that when risk aversion is introduced into the model Becker’s result no longer holds, and that if the monitoring technology is plagued by type II errors, so that some complying individuals are sanctioned, the welfare losses from these errors may be severe. They conclude that “a system with monitoring and sanctions represents a welfare improvement relative to other alternatives for reasonable estimates of the monitoring costs. In particular, the monitoring and sanction system leads to higher welfare than a system with time limits.”
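
Becker's risk-neutrality logic, and how risk aversion breaks it, can be illustrated numerically; the CRRA specification and all numbers below are mine, not the paper's:

```python
def crra_u(c, zeta):
    """CRRA utility; zeta is the coefficient of relative risk aversion (zeta != 1).
    zeta = 0 gives the risk-neutral (linear) case."""
    return (c ** (1 - zeta) - 1) / (1 - zeta)

def expected_utility(income, fine, prob, zeta):
    """Expected utility when a fine is imposed with probability prob."""
    return prob * crra_u(income - fine, zeta) + (1 - prob) * crra_u(income, zeta)

income = 1.0
# Three monitoring/sanction policies with the SAME expected fine (prob * fine = 0.05)
policies = [(0.5, 0.1), (0.1, 0.5), (0.0625, 0.8)]

results = []
for prob, fine in policies:
    eu_neutral = expected_utility(income, fine, prob, zeta=0.0)  # Becker's case
    eu_averse = expected_utility(income, fine, prob, zeta=0.5)   # risk aversion
    results.append((eu_neutral, eu_averse))
    print(prob, fine, round(eu_neutral, 4), round(eu_averse, 4))
```

For the risk-neutral agent only the expected fine matters, so rare-but-draconian punishment with cheap monitoring is costless; for the risk-averse agent the same expected fine hurts more the rarer and larger the sanction is, which is the mechanism undermining Becker's result.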

As in A&S, there are three groups of interest: employed, unemployed and activated/sanctioned. A key difference between Boone et al. and A&S is that in the latter activation was a random sanction, whereas in the former the sanction rate depends on the search intensity of the individual (the variable s in the paper is the search intensity). In order to make the sanction rate dependent on search intensity, it is of course necessary to add (costly) monitoring to the model. As in A&S, the utility level of sanctioned/activated individuals (receiving a benefit UA = Z = zw) is lower than the utility level of an unemployed worker (receiving a benefit UI = B = bw). However, the precise way this utility differential comes about is another of the key differences between the two setups; in this paper sanctions hit income directly, not leisure. When people are sanctioned, rather than obtaining a lower utility level through an implicit tax on leisure, they simply suffer a direct negative income shock – i.e. Z < B.

They mention early on (p. 402) that they consider a system with four policy variables of interest: the level of unemployment benefits [B] and unemployment assistance [Z] (the difference between the two is the sanction), the rate of monitoring of people receiving UI benefits [μ] and the precision of the monitoring technology [σ]. Given that, you’d expect the policy variables to be Z, B, μ and σ. Guess again: b, p, μ and σ are the four instruments involved here. Using those variables amounts to the same thing though [p = 1 − z/b, so you can think of p as a ‘penalty’, and B = b·w]. Incidentally, a level of σ = 0 implies an Andersen & Svarer-type model where sanctioning is random; the higher σ is, the more precise the monitoring technology. Also note that the monitoring costs per individual monitored are increasing in σ – i.e. the more precise the monitoring technology, the more expensive the monitoring system is per person monitored.
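
The equivalence of the two parameterizations is just arithmetic; with made-up numbers:

```python
# Mapping between the two equivalent parameterizations:
# benefit levels (B, Z) vs. replacement rate b and penalty p.
w = 100.0          # wage (illustrative)
b, p = 0.6, 0.25   # replacement rate and penalty (illustrative)

B = b * w          # UI benefit
z = (1 - p) * b    # assistance replacement rate
Z = z * w          # unemployment assistance

print(round(B, 10), round(Z, 10))
# the penalty can be recovered from the two replacement rates
assert abs(p - (1 - z / b)) < 1e-12
```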

φ is the job separation rate, α is the exit rate from unemployment to employment, θ = v/S is the labour market tightness, and π is the probability of being sanctioned given search effort e (the expression in the paper involves an upper-bar over e – this notation is part of why I hate writing this stuff in WordPress). The probability of being sanctioned depends linearly on search effort. It’s standard search-matching stuff: a matching function depending on θ, workers who optimize value functions (log-utility, until section 3.3.3 where they introduce a different risk aversion specification as well – see below) over search effort, a firm side similar to standard DMP, and wage determination through Nash bargaining with bargaining power β. As mentioned earlier, monitoring is costly, and this is another aspect where the paper differs from A&S; here the government uses a wage tax on the employed to finance the benefits of the unemployed and sanctioned, as well as the monitoring that is required. There’s an additively separable welfare function which depends on the utility levels of the three groups in the economy and which can be optimized over the policy variables of interest subject to the budget constraint – I won’t go into details about this stuff but rather focus on the conclusions.

Two analytical results can be obtained from the model. The first is that the optimal policy involves p > 0. Recall that p satisfies z = (1 − p)·b, which means that if p is equal to zero, there’s no difference between the benefit level of people who are sanctioned and people who are not. An optimal p > 0 means that it’s optimal for sanctioned individuals to have a lower income than non-sanctioned individuals. Two key mechanisms drive the result: a taxation externality and an entitlement effect. It’s a combination of the fact that, in the model, sanctioned individuals don’t take into account that if they increase their search effort, the government will be able to finance the same level of insurance with a lower tax level; and the fact that if the government increases the penalty, the search effort of sanctioned individuals will increase, because searching more makes them more likely to become entitled to UI benefits again.

The second result relates to the question of whether introducing monitoring and sanctions into a model with time limits will be optimal. Here we have the usual problem with tradeoffs: The simple answer is that this is not always the case; it’s the case only when the benefits from introducing the scheme exceed the costs. The benefits of the scheme relate to the search incentives of the unemployed; the costs relate to the monitoring activities which need to be financed. In the simulations, they basically find that it’ll almost always be welfare improving to introduce monitoring and sanctions.

Introducing a different (CRRA) specification of risk aversion, where the degree of relative risk aversion is less than one [1 − ζ] (see also here), doesn’t change the conclusions of the paper; stronger risk aversion strengthens the case for monitoring and sanctions. They introduce preference heterogeneity in the last part of the paper, by adding random shocks to the value of leisure. Here there are four states instead of two: unemployed and sanctioned individuals in either state one or state two. State one is the default state we’ve operated with so far, whereas state two is a state where search effort becomes prohibitively costly for the affected individual. Individuals transition randomly across states; however, the transition rate from state two to employment is zero. Unsurprisingly, the welfare gains from introducing monitoring and sanctions in this model are smaller than in the baseline case.

The paper briefly mentions that sanctions are much more widespread in the US than in Europe; we’ve covered that in more detail in the lectures. US sanction rates are often in the order of 30 percent, whereas, as an example, the Danish sanction rate was around 0.3% in 2004-2006. Sanctions are relatively rare in Denmark, and most of them belong in the mild category (a few days’ income, rather than complete loss of income over an extended period of time). However, it’s worth mentioning that if you also think of the Danish activation requirements as (random) sanctions, the pattern looks different and the sanction rates differ less; as mentioned before, the Danish government spends a lot of money on activation measures compared to most other countries, and an unemployed Dane is far more likely to go into activation than is e.g. an unemployed person from the U.S.

October 24, 2012 Posted by | economics, personal | Leave a Comment

## Work blogging 2

As indicated, the second paper on my reading list is not as easy to cover here as the first one was. It’s quite a bit more technical, so there’s a lot of stuff which is harder to cover. The paper covers many of the same themes as the first paper (it’s written by the same people and published around the same time), but it handles some of the aspects in far more detail. The modelling will probably be a bit hard to understand if you’ve never worked with economic models before, but I’ve tried to outline in the post what at least some of the model-building is aiming at.

I have decided that I want to try to blog at least all of the material this specific course deals with; I haven’t yet figured out whether it’ll make sense to try to ‘workblog’ the other stuff I’m doing this semester, but it takes time to write these posts, so I can assure you that I’ll not try to cover everything I’m supposed to learn this semester. In the post I’ve decided to just write some relevant stuff about the various aspects of the models and results presented in the paper and keep it relatively superficial (I’ve included nothing which relates to the stuff in the appendix) – hopefully you’ll understand a bit of what’s going on. Of course I mainly write these posts for myself – I know that I learn stuff from writing them – but please don’t forget that I’m actually also providing you guys a valuable service here; the last post I wrote was a condensed version of an almost 40-page paper which took me at least a few hours to read and prepare notes for, and which you could read in just, what, 5-10 minutes?

So, this new paper – what’s it all about? There’s some introductory stuff which closely relates to the previous paper, some theoretical model-work, and then a part which handles model simulations and numerical illustrations. I’ve spent most of my time on the model-work (both in the post and when working with the paper previously). A key policy challenge for decisionmakers is to find a ‘proper’ balance between incentives and insurance in the labour market, and part of what this paper does is to take a closer look at different aspects of workfare, in order to figure out how workfare is likely to affect incentive structures in the labour market and thus labour market outcomes. If you haven’t read the first post, you should probably start there before going any further. As in the last paper, the (now no longer implicit) model operates with three groups: employed, unemployed and people in activation. Again you have a threat effect, a lock-in effect and a wage effect. The paper disregards human capital considerations, so the post-programme effect is absent and state dependence is not addressed. The modelling framework takes benefit levels as given and then proceeds to ask whether workfare elements can change the insurance/incentives aspects of the model and improve labour market performance. In the previous paper I did not feel it was completely clear how the different workfare dimensions worked and how they differed, but in the formalized presentation here it is made very explicit; the two main policy instruments are i) the probability that an unemployed person will be required to participate in an activation measure [P(au)] and ii) the activation work requirement [l(a)]. The latter refers to how much work you’re required to do while activated – the larger it is, the more time/effort you’re required to spend on activation.
One way to think about it is that one is the probability of X and the other the effect size of X. Going away from a unidimensional workfare requirement isn’t just something they do ‘to add complexity to the model’; it is shown in the paper that the two variables can be expected to affect different groups in different ways, and it is emphasized for this reason that the overall effects of various changes in the workfare requirements depend critically on the total policy package and the specific mix of the two policy variables.

The utility functions are standard leisure-income specifications, and the main variable of interest in the analysis is the search effort (and how this relates to unemployment). As already mentioned, the model work illustrates (in more detail) the effects also covered in the previous paper, for example the threat effect [∂S(u)/∂l(a)>0, ∂S(u)/∂P(au)>0], and it also illustrates much more precisely why a multidimensional specification of the workfare scheme matters when evaluating the effects of changes to the workfare requirements: ∂S(a)/∂P(au) is negative in the model, meaning that the effect on the job search effort of people in activation of a marginal increase in the workfare intensity is the opposite of the effect such an increase would have on the job search effort of people who are unemployed (and not in activation).

Given the model specification, people in activation spend more time on the work requirement and job search combined than unemployed people spend on job search, but which group actually spends more time searching is ambiguous. Searching is of course only half of the story, as there also need to be jobs that people who search can find. Unemployed workers search for jobs, firms post vacancies where people can get employed, and the unsurprising equilibrium conditions are briefly outlined. The job finding rate (α) of people searching is decreasing in the wage rate. Wage determination takes place according to a Nash bargaining solution where the bargaining power is taken to be exogenous. A key variable in the matching part of the model is the labour market tightness, θ = v/s (where v is the number of job vacancies available and s is the effective search volume).

How workfare affects job search incentives is important, but the main interest is of course the impact on (un)employment. The main thing to take away from that part of the formal analysis is that ‘things are complicated’. The net effect of a given policy change on the number of people who are unemployed, in activation and in employment “depends on the balance between counteracting effects”. The effective job finding rates (α*s) [the job finding rate conditional on search, times the search volume] of the two main groups (unemployed and activated) are key, and their contribution can be decomposed into an indirect wage effect, which is unambiguously positive and will thus increase the effective job finding rate for both unemployed and activated, and a direct search effect, the sign of which depends on the workfare dimensions and the group in question. This is another reason why a general equilibrium framework is required to fully understand the effects involved (more below); as also mentioned in the previous paper, in analyses which do not include the indirect wage effect, workfare elements will generally appear to perform worse than they do in analyses which include it.

The effect of a workfare policy change is, not surprisingly, dependent on the initial level of workfare in the system; it is for example shown that if no workfare elements exist ex ante, a policy maker can decrease total unemployment by introducing workfare elements while holding the benefit level constant. The dynamics of the level-dependencies involved are made more explicit in the model simulations, where one of the conclusions is that at a low level of workfare intensity [P(au)] the threat and wage effects dominate (i.e. a marginal increase in the workfare intensity will impact employment positively), whereas at a higher level the locking-in effect dominates (i.e. a marginal increase in the workfare intensity will impact employment negatively). When it comes to the work requirement [l(a)], on the other hand, total unemployment is unambiguously decreasing in the work requirement. Welfare goes down for all three groups analyzed, including the people in employment, when workfare is increased, although employers benefit from workfare because it impacts their profit share positively. As workfare can be thought of as effectively introducing some slack into the budget constraint of the government, the government is of course another party which gains from the introduction of the scheme.

The next paper in the series is Short-Run Equilibrium Dynamics of Unemployment, Vacancies, and Real Wages by Christopher Pissarides (who got the Nobel Prize two years ago). I’ve unfortunately not been able to find a non-gated version of this paper online.

September 7, 2012 Posted by | economics, papers | Leave a Comment

## Stuff

i. Population pyramids. Pretty neat. A few examples below. First, the world population pyramid, 2010:

Here’s how it looked in 1950:

Here’s the population pyramid for Western Africa, 1950:

And here’s how it looks today:

No, I didn’t copy the same image twice. When you’re at the site and click from one version to the other you can spot the difference, but it’s not easy if you’re just comparing the images even if you look carefully. Try to compare that ‘development’ with what happened in Western Europe. First 1950:

Notice the ‘hole’ in the middle? It looks really strange. I wonder what happened 30-35 years before 1950 that might have impacted birth rates so significantly… Here’s how the pyramid looked in 2010:

The site has more.

ii. The case for personal responsibility?

iii. Vihart has a new cute doodling in math class video up:

iv. I want to play this game at some point (while in the presence of at least one female. Otherwise it’d probably just be weird). Any ideas on how best to implement elo-difference-related handicaps here?
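For context, here is the standard Elo expected-score formula, which is presumably what any handicap scheme would want to invert (choose a handicap that pushes the stronger player’s expected score back towards 0.5). The ratings below are made up:

```python
# Standard Elo expected-score formula: a handicap scheme would pick a
# handicap that claws the favourite's expected score back towards 0.5.

def elo_expected_score(rating_a, rating_b):
    """Expected score of player A against player B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

# A 200-point favourite is expected to score about 0.76 per game:
e = elo_expected_score(1800, 1600)
```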

v. I linked to the Vice Guide to North Korea a long time ago. By accident I came across the site again recently, and I liked this video:

vi. The short version of why I may not ‘work blog’ the paper I’m reading right now:

I may decide to blog it anyway and just talk my way around the math, I haven’t decided yet. Much of the stuff the paper covers is also covered to some extent in the paper I linked to earlier today, so that’s certainly a better place to start for people with a time constraint who are curious to know more about these things.

Incidentally, while reading the second paper a hidden assumption that had crept into my first work blog post became apparent to me. I wrote that the article I covered was “an overview article that can be read by pretty much anyone who understands English”. This is not true and I should have known better. I measured the Gunning fog index of my own post about the article, and it came out at about 15,2 (‘the index estimates the years of formal education needed to understand the text on a first reading’). Surely the article itself scores lower (i.e. is more readable) than my blog post about it.
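For the curious, the index can be computed with the standard Gunning fog formula, 0.4 times the sum of the average sentence length and the percentage of words with three or more syllables. The syllable counter below is a crude vowel-group approximation of my own, so treat the output as a rough sketch:

```python
import re

def gunning_fog(text):
    """Gunning fog index: 0.4 * (average sentence length in words
    + percentage of 'complex' words, i.e. words of 3+ syllables).
    Syllables are approximated by counting vowel groups."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)

    def syllables(word):
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    complex_words = [w for w in words if syllables(w) >= 3]
    return 0.4 * (len(words) / len(sentences)
                  + 100.0 * len(complex_words) / len(words))
```

Proper implementations use dictionary-based syllable counts and exclude proper nouns and familiar jargon from the complex-word tally, so expect somewhat different numbers from online calculators.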

I know that most of you know this, but maybe it’s worth rehashing even so: I’m not a journalist, and I will generally neither think about nor care about how ‘readable’ my stuff, or the stuff I link to, is. That’s not to say I do not try hard to be very precise when it comes to terminology and choice of words and so on.

vii. This is an awesome video:

The future is now.

September 5, 2012 Posted by | blogging, economics, random stuff | 2 Comments

## Work blogging

I thought I’d try this and see how it goes. When starting a semester there’s always some easy overview stuff that should not cause people outside the field any problems and I thought I’d start with that. The current post will be based on the paper Flexicurity – labour market performance in Denmark by Andersen and Svarer. Monday I printed approximately 40 papers (un)like this to be read during the semester, and I’m not sure I’m going to be blogging all of them but we’ll see how it goes.

The article is as mentioned just an overview article that can be read by pretty much anyone who understands English. It’s not hard, it’s just stuff I need to know. I filled one A4 paper with notes related to the paper and my blogpost will be based on those notes, rather than the text; I assume this approach will be useful in terms of preparing for the exam because at that point I will not have time to reread the paper. Some of my remarks may not be from the paper but instead related to stuff covered during the first lecture.

First off they talk a bit about the (‘Danish’) flexicurity model, which is based on a combination of a relatively flexible labour market and relatively high social transfers providing social insurance. This model has often been argued to be a major factor behind the Danish (and in other contexts, Scandinavian) economic performance. In the paper they argue that the flexicurity model has been ‘around’ to a significant extent since the 70′es and given the economic performance of Denmark in the 70′es and 80′es the flexicurity model is probably ‘not the whole story’. They argue in the paper that a third factor, active labour market policies, has been crucial for the relative success of the model.

They mention and talk a bit about – and I believe they also misspell – the Ghent system (Gent in the text), which relates to how the Danish UI (unemployment insurance) benefit payments scheme works. Other noteworthy features (in this context) of the Danish labour market: Many small firms, and people who are temporarily laid off constitute a substantial number of the unemployed at any given point in time.

They talk a bit about EPL [employment protection legislation] and argue that there’s a distinction to be made between ‘job security’ (strict EPL) and ‘employment security’ (lax EPL). Denmark has relatively lax EPL.

Denmark has a high replacement rate (UI benefits are relatively high compared to the wages of people in employment), especially for low-wage workers. So low-wage workers generally confront the highest marginal tax rates related to the transition from unemployment to employment.
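A toy illustration of why a (roughly) flat benefit cap produces this pattern; the benefit rate and cap below are invented for illustration and are not the actual Danish parameters:

```python
# With a benefit cap, the replacement rate b/w falls as the wage rises,
# so the implicit tax on taking a job is highest for low-wage workers.
# benefit_rate and benefit_cap are made-up numbers, not Danish rates.

def replacement_rate(wage, benefit_rate=0.9, benefit_cap=18000):
    """UI benefit as a share of the previous monthly wage, with a cap."""
    benefit = min(benefit_rate * wage, benefit_cap)
    return benefit / wage

low = replacement_rate(wage=20000)   # 18000/20000 = 0.90
high = replacement_rate(wage=45000)  # 18000/45000 = 0.40
```

With these numbers the low-wage worker replaces 90% of the wage by staying unemployed, the high-wage worker only 40%, so the financial gain from employment is smallest at the bottom of the wage distribution.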

Reforms in the 90′es had three main effects: i) shorter duration of benefits, ii) changed rules regarding eligibility – getting a job basically became a requirement for ‘resetting the clock’ regarding benefits; it was/is no longer enough to participate in a job training program, and iii) workfare. Youth unemployment was dealt with by lowering benefits (to the level of study grants) for young people and by implementing stricter activation requirements for this population segment. Wage formation has become less centralized over time.

Activation measures generally last about 6 months. Workfare affects both employed and unemployed people. Unemployed people in the active labour market programmes are subject to a lock-in effect, which means that the activation requirement may crowd out job search. They are also subject to a positive effect, the post-programme effect, which captures the fact that an activation programme may increase human capital. (Though it’s worth noting here that even if human capital goes up, job search efforts may still be impacted negatively by the programme, e.g. via narrower job search post-activation.) Unemployed people who are not in an activation programme may increase search efforts prior to being faced with activation measures, as activation measures are generally unenjoyable (in the literature they are often modelled as a tax on leisure). This threat/motivation effect has been shown in a Danish context to be both real and significant. People who are employed are also impacted by the workfare requirements imposed on the unemployed, because those requirements make the outside option (other jobs) less attractive, which means that the wage demands of people in employment will be impacted by the policies; from the point of view of a person who’s already employed, workfare can be considered a tax on job searching and/or an increase in search costs. They argue in the paper that active labour market policies have impacted wage formation in Denmark during the 90′es. The wage effect is an indirect effect which is hard to observe, and it illustrates why a general equilibrium framework is necessary to evaluate the costs and benefits of labour market policies.

The time profile of the UI scheme has changed since the reforms were first implemented, as compensation is now falling with the duration of unemployment (in the 80′es it basically wasn’t). For a long time this fall was caused both by the jump from UI benefits to kontanthjælp after the UI benefits had been exhausted and by the implicit tax on leisure which hit unemployed people who had received UI benefits for some time and thus became subject to workfare requirements. Today unemployed people face workfare requirements from day one, but as the UI benefit duration has been shortened even further (to 2 years in 2010; not in the text), the time profile aspects of the system are still very important.

Noteworthy is the fact that workfare requirements introduce a screening element into the benefits system, as benefits are arguably better targeted to people who ‘really need them’ (and thus are willing to be subject to the workfare requirements). Also noteworthy is that how one perceives workfare requirements can impact the effects they can be expected to have; for example, one might perceive workfare requirements as ‘an option to prolong the benefits period’ rather than a ‘condition to get benefits’, and a recipient of UI benefits might start to ‘think of workfare as a job option’ – such perceptions would be expected to cause workfare to crowd out job search.

Workfare seems to be popular politically, compared with lowering benefits. Most voters care more about income distributions which can be measured than utility functions which can’t.

Empirically, the lock-in effect is more significant in the short run than the post-programme effect, and (as already mentioned) the threat effect is real and significant. The wage effect is hard to measure, but given current estimates it’s probably quite significant. In the long run the post-programme effect is likely to be larger than in the short run; this again relates to the extent of hysteresis/state dependence. When evaluating the costs and benefits of workfare, it’s important to deal with this aspect. In general, the reforms of the 90′es have improved cost effectiveness, but this is still an issue. Denmark is at the absolute top of most measures of spending on active labour market policies.

Sanctions, which are imposed on people who are subject to workfare requirements but do not meet them, have increased over time. Males are arguably more responsive to sanctions than females. Workfare may be improved through better targeting of programmes; for example, supplementary education, the most common activation measure, is more likely to be cost-effective for people with low education than for people with high education. In the public sector, matching groups have been implemented to improve the efficiency of the programmes.

Any kind of feedback is most welcome.

September 5, 2012 Posted by | economics, papers | 5 Comments

## Having fun

(Click to view full size:)

I spent most of the day doing exercises. 10 hours or so. Then an hour’s worth of reading on the side. I think perhaps I’d have found this stuff interesting 4-5 years ago.

Imagine how much fun it is to spend your Saturday doing this stuff while feeling guilty about not doing even more of it, even though you pretty much hate every second of your life you spend on it, all the while feeling that it’s futile anyway because you’ll probably just fail.

The funny thing is that if you add the total number of hours I’ve spent on this course (combined, remember that I’m retaking it this semester), I doubt anyone who got less than an A would be even close to that total time expenditure. I’ll consider myself very lucky if I get a C. I think he failed something like one-fourth/one-third of the class at the original exam in January.

Don’t expect answers to this post, I’ve been offline all day and I’m not sure I’ll go online again before the exam.

February 18, 2012 Posted by | academia, economics, education, personal | Leave a Comment

## Data on Danish immigrants, 2011 (4)

Before I started this post I thought it would be the last one in the series, but at the end of the day I decided to save the crime data for later. This part will mostly deal with public expenditures and stuff like that. Here’s a link to the previous post in the series.

*While non-Western immigrants make up 6% of the population at the age of 16-64, they make up 10% of all people in Denmark who derive their main income from government transfers (…‘are provided for by the government’ is perhaps a more ‘direct’ translation. The Danish term used in the report is: ‘er på offentlig forsørgelse’). In this framework, the concept of government transfers includes various direct income transfer programs like unemployment benefits (kontanthjælp, dagpenge) and early retirement programmes (efterløn, førtidspension), as well as governmentally subsidized employment programs (ansættelse med løntilskud, fleksjob). People working for the government are not included. (p.87-88) The ‘% of X who are provided for by the government’-measure is not the share of people in the sample who have received the various transfers at some point over the course of a year; it is instead based on summing the time spent on these transfers by everyone who received them at some point during the year. If you have a group of one hundred people and twelve of them each received a transfer for one month during that year, that would translate to 1% of that population being provided for by the government; it’s a rough measure of the number of ‘full-time recipients’ and should be interpreted as such. For people who receive early retirement transfers from the government the overlap between the total number of recipients over the course of a year and the number of ‘full-time recipients’ is naturally much larger than it is for transfers like unemployment benefits. (pp.87,104)
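The measure described above amounts to a simple full-time-equivalent calculation, which can be sketched using the hundred-people example:

```python
# The 'provided for by the government' measure is a full-time-equivalent
# count: months on transfers are summed and divided by 12, rather than
# counting heads.

def full_time_recipient_share(months_on_transfer, population_size):
    """Share of the population 'provided for' in full-time-equivalent
    terms; months_on_transfer has one entry per recipient."""
    full_time_equivalents = sum(months_on_transfer) / 12.0
    return full_time_equivalents / population_size

# The example from the report: 100 people, 12 of whom each received a
# transfer for one month -> one full-time recipient, i.e. 1%.
share = full_time_recipient_share([1] * 12, 100)
```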

*In Denmark, two of the main social assistance programs for people in the workforce are ‘kontanthjælp’ and ‘dagpenge’. Kontanthjælp is the basic income support system for people without any kind of supplemental job insurance, and you can only receive it when you’ve basically depleted your assets – if you have liquid assets worth more than ~\$2.000 (Danish link), you do not have the right to receive this transfer. In this context, a car you might need to drive to work is considered a liquid asset. Dagpenge is a more generous job insurance scheme subsidized by the government; the transfer payments are higher and they are completely independent of personal wealth. Approximately one in four (24%) of all people who receive kontanthjælp are non-Western immigrants. (p.87) 7% of all non-Western immigrants at the age of 16-64 receive kontanthjælp, whereas the corresponding number for people of Danish origin is 1,5%. (p.91)

*As the employment rates of non-Western immigrants are lower than the employment rates of people of Danish origin, it makes sense that they are also more likely to be provided for by the government. 38% of non-Western immigrants are provided for by the government, whereas the corresponding numbers for people of Danish origin and Western immigrants are 24% and 16%. (p.87)

*More than half of Lebanese, Iraqi, and Somali immigrants are provided for by the government. And more than half of all women from Lebanon, Somalia, Yugoslavia, Iraq and Turkey are provided for by the government. (p.87)

*Middle aged immigrants in particular have much lower employment rates than people of Danish origin at the same age, and they are thus much more likely to be provided for by the government. 60% of male non-Western immigrants at the age of 50-59 and 61% of female non-Western immigrants at the age of 50-59 are provided for by the government. The corresponding numbers for males and females of Danish origin are 23% and 26%. (p.87)

*The country of origin is an important variable when considering the likelihood that an individual immigrant is provided for by the government. 20,7% of all males of Danish origin at the age of 16-64 were provided for by the government in 2010. For Western immigrants combined it was 13,9% of males at the age of 16-64 who were provided for by the government, and for non-Western immigrants combined it was 36,7% of males at the age of 16-64 who were provided for by the government. Some more detailed numbers for male Western and non-Western immigrant populations – first the Western countries: Sweden (19,3%), Germany (18,6%), Great Britain (18,0%), Iceland (16,8%), Italy (15,7%), Norway (14,9%), Poland (12,9%), USA (11,0%), Netherlands (10,1%), France (8,8%), Romania (8,0%), and Lithuania (3,3%). The corresponding numbers for non-Western countries: Lebanon (57,8%), Iraq (51,5%), Somalia (50,1%), Bosnia-Hercegovina (45,6%), Ex Yugoslavia (44,4%), Iran (44,1%), Morocco (41,7%), Sri Lanka (37,3%), Turkey (37,0%), Afghanistan (35,1%), Vietnam (31,4%), Pakistan (29,5%), Russia (20,4%), Thailand (16,5%), Philippines (14,8%), India (9,7%), China (7,8%), and Ukraine (2%). (p.94)

*The female numbers are generally higher. I shall have to make a small digression here before I deal with those numbers: When the Danish Welfare Commission (Velfærdskommissionen) analyzed the distributional features of the Danish welfare system with respect to gender, they found (Danish link) that females were on average net beneficiaries and males on average net contributors over an entire life span – a newborn male could, given current policies at the time the report was made, expect to pay in 0,8 million kroner (\$150k) more than he’d receive over his lifespan, whereas a newborn female at that time could expect to receive 2,4 million kroner (\$435k) more from the government than she’d contribute in taxes etc. Danes who are interested can read chapter 3 of this report – unfortunately I do not think an English version of that report exists. It’s likely that the relative contribution rates have changed somewhat by now, but it would surprise me a lot if they are much different, as most of the reasons for these distributional consequences of the welfare system have not changed much.

*Either way, as mentioned above when it comes to the females the numbers are generally higher for all groups. Of the females of Danish origin at the age of 16-64, 26,3% of them were supported by the government in 2010. For female immigrants from Western countries, the corresponding number was 18,9% and for non-Western female immigrants the number was 39,1%. Below some country-specific data – first Western countries: Sweden (24,3%), Poland (24,0%), Norway (23,5%), Great Britain (21,0%), Iceland (20,8%), Germany (18,7%), Romania (15,4%), Netherlands (14,2%), USA (12,4%), France (11,6%), Lithuania (11,5%), and Italy (11,3%). Non-Western countries: Lebanon (66,2%), Somalia (55,6%) Ex Yugoslavia (54,9%), Iraq (53,6%), Turkey (51,3%), Bosnia-Herzegovina (49,9%), Morocco (49,4%), Pakistan (45,1%), Iran (42,8%), Afghanistan (41,7%), Sri Lanka (41,6%), Vietnam (39,2%), Thailand (23,0%), Russia (20,9%), India (18,6%), China (13,9%), Ukraine (12,5%), and Philippines (11,7%). (p.95)

*The report doesn’t talk much about the data, but when analyzing the numbers above there are a couple of observations worth making. The first is that the Swedish numbers are problematic to compare with those of the rest of the Western countries – it is quite likely that part of the reason why the Swedish numbers are high is that many of the ‘Swedish immigrants’ Denmark receives are in reality immigrants from non-Western countries who have used Sweden as a stepping-stone to enter Denmark, because Swedish immigration laws are much more lax than the Danish ones, and it is much easier to enter Denmark via Sweden than, say, via Somalia. Another thing to note is that the non-Western countries with high dependency rates are almost exclusively countries with large Muslim populations. The non-Western immigrants from Thailand, China, Russia, India, and Ukraine in fact all ‘do better’, some of them much better, than people of Danish origin – and most of these populations are perfectly comparable to the immigrant populations from Western countries.

*Calculating net contribution rates is beyond the scope of a report like this, but I thought it would be worth including a few numbers from the publications of the Danish Welfare Commission (Velfærdskommissionen, also mentioned above). The short version is this (pp.121-122):

The graphs display the calculated net contribution to the government finances of males (the first one) and females (the second one) depending on age given the policies that were in effect at that point in time. The calculations are based on the Danish DREAM model.
Green = Danish origin.
Dark blue = immigrants from ‘developed countries’ (direct translation: ‘more developed countries’).
Turquoise = descendants of immigrants from -ll-.
Red = immigrants from ‘lesser-developed countries’.
Grey = descendants of -ll-.

They calculate in the report (p.123) that, when looking at the financial net contributions to the government over the lifespan of an individual, the estimated net present value (NPV) for a male immigrant from a lesser-developed country is -0,28 mio. kroner (-\$50k), whereas the NPV for a female immigrant from a lesser-developed country is -4,4 mio. kroner (-\$800k). The NPV for a new-born male descendant of an immigrant from a lesser-developed country is -0,17 mio. kroner (-\$30k), and the NPV for a new-born female descendant of an immigrant from a lesser-developed country is -3,13 mio. kroner (-\$570k). The NPVs for immigrants from more-developed countries are 3,04 mio. kroner/\$553k (males) and -0,65 mio. kroner/-\$118k (females). The estimates are from 2004 and they are sensitive to changes in policy, but not that sensitive.
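A minimal sketch of the kind of calculation behind such numbers: discounting a stream of yearly net contributions (taxes paid minus transfers and services received) back to birth. The age profile and the 3% discount rate below are invented for illustration; the Commission’s DREAM model is of course far more detailed:

```python
# Lifetime NPV of net contributions to the government, discounted to
# age 0.  The profile and the 3% rate are made up for illustration.

def lifetime_npv(net_contributions, discount_rate=0.03):
    """Net present value at birth of a yearly net-contribution stream,
    one entry per year of age."""
    return sum(c / (1 + discount_rate) ** age
               for age, c in enumerate(net_contributions))

# Stylized profile (in 1000s of kroner): net recipient as a child,
# net contributor during working age, net recipient in retirement.
profile = [-100] * 20 + [150] * 45 + [-120] * 15
npv = lifetime_npv(profile)
```

The sign and size of the total depend heavily on how long the working-age (net-contributing) stretch is relative to the rest, which is exactly why employment rates drive these NPV differences.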

*Off topic, but I thought I should mention it anyway: The Florida Birth Defects Registry in 1999 estimated the lifetime costs for a child with Down Syndrome to be nearly \$500,000. A Danish estimate would be much higher, but note that this cost estimate is significantly lower than the cost estimate for an average female immigrant from a lesser-developed country. In the 90′es it was nevertheless not uncommon in Denmark to see political arguments to the effect that we needed to import immigrants from the Third World in order to save the Danish welfare state from economic ruin in the long run.

*Anyway, they remark in the Welfare Commission report that:

‘The negative contributions per person for immigrants and descendants from lesser-developed countries have a significant effect on the total future public-sector financing problem, because both of these groups are growing fast. In 2003 these two groups made up 4,7% of the population, whereas in 2040 they are expected to make up 11,8% of the population, if the present (low) level of immigration is maintained.’

(“De negative bidrag pr. person for indvandrere og efterkommere fra mindre udviklede lande har en betydelig effekt på det samlede fremtidige finansieringsproblem for den offentlige sektor, fordi begge disse grupper vokser med betydelig hast. I 2003 udgjorde de to grupper tilsammen 4,7 pct. af befolkningen, mens de i 2040 forventes at udgøre 11,8 pct. af befolkningen, hvis den nuværende (lave) indvandring fastholdes.” – p.125)

*As mentioned before, the overlap between the number of people who are in fact full-time recipients of a given public transfer payment and the number of people who have received that type of transfer payment only during a short period of the year depends on the nature of the transfer. A way to measure the average duration for which people receive a certain type of transfer is to divide the number of calculated full-time recipients by the number of people who have at some point during the year received the transfer. Immigrants from non-Western countries who receive temporary transfers on average receive those transfers for a longer period of time than do people of Danish origin or immigrants from Western countries, and this is particularly the case when it comes to kontanthjælp: Non-Western immigrants who receive kontanthjælp on average receive it for 52% of the year, whereas the corresponding number for people of Danish origin is 40% – which is again significantly higher than the number for Western immigrants, 31-32% (judging from the graph on page 104; no numbers are given in the text).

January 31, 2012 Posted by | data, demographics, economics, immigration | Leave a Comment

## Data on Danish immigrants, 2011 (3)

This is the third post in the series; here are the first two posts. This part will deal with education, and I must admit that it’s less data-heavy than the previous two posts, in part because I felt it was necessary to spend some time explaining how the Danish education system actually works (and in part because I feel there’s a limit to how much time I can justify spending on posts like these). I’ll do another post on crime later on, so this is not the last post in the series. Anyway, here goes:

*In 2010, 44% of male descendants of non-Western immigrants and 61% of female descendants of non-Western immigrants in Denmark at the age of 30 had finished an education leading to a vocational/professional qualification (see below for some notes on terminology). The corresponding numbers for people of Danish origin at the age of 30 were 73% and 79%. The education level of non-Western female descendants has increased over time; in 2004 the number was 44%. (p.65)

*It was a bit harder to translate stuff from this section than from the rest, because the Danish education system is a bit different from that of e.g. the US, which creates a few problems related to terminology. The terminology I’ve used in this section when I was in doubt follows this source. So, which educations are in fact included in the ‘education leading to a …’ (abbreviated ELVQs in the following) measure above, and which are not? ELVQs include (Danish link) various technical educations (electrician, carpenter,…), further education leading to a degree (BA, MA, PhD) as well as various other educations (office education, teaching, nursing,…). A high school degree is not included in the set, nor is a grundskoleuddannelse (see below), and if you’re a college drop-out who has not obtained a degree you’re also not included in the set of people with an ELVQ. The idea is of course that if you have an ELVQ, you have finished an education that has given you specific skills which are useful in terms of finding and retaining employment. I decided this would also be as good a place as any to add a bit more background info about the Danish education system, which you might need in order to make sense of the numbers in the report – it’s not in there, so no page references. In Denmark the lowest attainable ‘formal education level’ (i.e. disregarding drop-outs before that point) is completion of the 9th grade (grundskoleuddannelse). The graduation exam is called ‘Folkeskolens afgangsprøve’. Technically it’s a little complicated where exactly to put high school in terms of grades, because some people finish 9th grade and then go directly to high school (I did), whereas others take 10th grade first, at the same place they took 1st-9th grade, before they go to high school. The coursework in Danish high schools is the same for people who went to 10th grade before going to HS and for people who didn’t, and HS classes are a mix of both types of students.
I’m not completely sure if you’re required to take 10th grade before you can enroll in a vocational(/technical) education like carpentry, but I think some of them do demand that you have 10th grade before you can start, or at least that you have taken some of the specific courses (Danish, maths). Adult immigrants without an education can take a ‘basic adult education’ which is supposed to confer the same skills as a traditional grundskoleuddannelse (in a shorter amount of time) – after they have that they can move on to a vocational education or secondary education.

*A Danish ELVQ, perhaps needless to say, significantly increases employment opportunities. For 30-39 year old male non-Western immigrants with only a grundskoleuddannelse/basic adult education, the employment rate was 58% in 2010 (females: 45%, p.79). For those with a vocational education, the employment rate was 76% (females: 78%). For those with a medium-cycle higher education (‘mellemlang videregående uddannelse’), it was 82% (females: 84%). For those with a long-cycle higher education (MA or equivalent/higher), it was 79% (females: 77%). (p.65 unless otherwise specified)

*When you look at the descendants of non-Western immigrants at the age of 30, 41% of males and 25% of females have only a grundskoleuddannelse. The corresponding numbers for males and females of Danish origin are 18% and 13%. 22% of male and 30% of female descendants of non-Western immigrants have a vocational education at the age of 30; the corresponding numbers for people of Danish origin are 40% and 30%. When it comes to medium-cycle higher education, the numbers for non-Western descendants are 6% and 15%; the corresponding numbers for people of Danish origin are 10% and 24%. 10% of male descendants and 8% of female descendants of non-Western immigrants at the age of 30 have a long-cycle higher education; 13% of males of Danish origin and 15% of females of Danish origin at that age have one. As mentioned above there’s generally a pronounced gender difference when it comes to the education of non-Western descendants, as 61% of female descendants and 44% of male descendants at the age of 30 have an ELVQ. (p.67)

*I’ll add a couple of cautious remarks here regarding how to interpret the numbers above – remarks which are not included in the report (so no page references): a) There’s probably a significant power issue here when considering forecasting based on these numbers, because the number of non-Western descendants in this age group (30-year-olds) is quite low – n=558 (males) and n=559 (females). b) In terms of forecasting, heterogeneity might also be an issue. It matters whether you’re looking at descendants born before or after 1983-84, because the composition of new immigrants changed at that point (and in the medium run, so did the composition of immigrants in Denmark as a whole). I already talked a bit about related matters in the comment section here. Non-Westerners who came before, say, 1980 mostly came here to work; the number of non-Westerners with refugee status or family reunification status, on the other hand, increased dramatically after 1983 due to policy changes implemented at that point. Another dimension along which heterogeneity is relevant is the change in the country profile of descendants – a change driven not only by shifting immigration patterns but also by fertility differences across subpopulations; the total fertility rate of Somali immigrants is almost twice that of Turkish immigrants (86% higher, p.26), and these differences aren’t new. It should perhaps be made clear that even if the change in the composition of non-Western descendants in the past might have had adverse effects on some human capital measures (SES of parents, IQ…) of the descendant group ‘as a whole’, it’s far from certain that this will lead to lower educational outcomes of the group in the future – for example, political commitment to improve educational outcomes of these groups might more than make up for the other effects.
From 2004 to 2011 the educational outcomes of non-Western descendants improved, but there were only 72 non-Western descendants in the relevant group in 2004, so it’s hard to draw strong conclusions from this – we once again run into the power issue.
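The power point in a) can be made concrete with a quick standard-error calculation. A minimal sketch – the 44% ELVQ share for male descendants is from the report, and the formula is just the usual normal approximation for a sample proportion:

```python
import math

def ci_halfwidth(p, n, z=1.96):
    """Half-width of a 95% normal-approximation confidence interval
    for a sample proportion p based on n observations."""
    return z * math.sqrt(p * (1 - p) / n)

# 44% of male non-Western descendants aged 30 had an ELVQ, n = 558
print(round(100 * ci_halfwidth(0.44, 558), 1))  # ±4.1 percentage points

# the 2004 comparison group had only 72 descendants
print(round(100 * ci_halfwidth(0.44, 72), 1))   # ±11.5 percentage points
```

With n=72 the uncertainty band is almost three times as wide as with n=558, which is why the 2004-2011 comparison shouldn’t be leaned on too heavily.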

*One way to try to draw inferences about future educational profiles is to look at the educational profile of descendants currently aged 20-30 and compare it with the historical educational profile of the 1980-generation (the current 30-year-olds). This is done below: the first graph contains data for the current 20-30 year-olds, the second contains data for the current 30-year-olds (green = females, blue = males – the lower curves are for non-Westerners). The graphs show how big a percentage of the group had obtained an ELVQ at any given age between 20 and 30. For example, 40% of non-Western males have an ELVQ at the age of 28 (and this was also the case for the 1980-generation):

*Part of the reason why I’ve focused mostly on descendants is that it is very hard to figure out the education levels of (first-generation) immigrants, because the data the authors made use of include only educations completed at Danish educational institutions. In other words, both an Italian nuclear physicist educated in Rome and a poor Sudanese woman without a primary school education will have an ‘unknown’ education level (uoplyst) in these data sets, making it harder to pinpoint just exactly what is going on. A big majority of immigrants – 77% of Western and 69% of non-Western – do not have a Danish education. (p.80) However, it seems relatively clear that, at least when dealing with non-Western immigrants, an ‘unknown’ education level probably most often translates to a low education level – the employment rate of non-Western female immigrants with an unknown education level is just 33% (p.80).

January 25, 2012 Posted by | data, denmark, economics, education, immigration | Leave a Comment

## Everyone has a price, but there’s a limit? What will people (not) do for money?

“Questions: Would most people you know kill their favorite pet for \$1 million? What about you?
Answers: Most people: Yes (23%) No (72%);
Yourself: Yes (11%) No (83%).”

A recent Vanity Fair poll, via Robin Hanson (whom I no longer read on a regular basis, but still visit once in a while). Hanson claims that you’d take the million. The survey and the responses made me start thinking about what people will actually do for money, what they won’t and which variables impact that decision process. Some general remarks:

i. Financial vulnerability/poverty lowers ‘your price’ and increases the choice set of stuff you’d do to get money.

ii. ‘Status effects’ matter – Hanson of course covers this. A few remarks: People usually know what ‘the right answer’ to these types of questions is supposed to be, and the more costly it seems to ‘do the right thing’, the higher the status value of professing that specific belief. It’s a bit like when dealing with religious tribes; the crazier the idea is, the more credible the signal. In my mind this observation also leads to a related hypothesis: Making it more costly (in terms of time, effort, money) to ‘do the right thing’ in the hypothetical does not necessarily make it any less likely that people will ‘take the money’ – actually it can have the opposite effect, because the value of the signal goes up as well; perhaps the value of the signal increases even faster than the hypothetical costs, especially above a certain threshold where people decide that their choices will have no real-world consequences. Paradoxically, by making one of the options so attractive as to be borderline absurd you can end up making sure that a lot of people will give you the opposite answer – i.e. ‘the perceived right answer’.

iii. Framing effects matter. Framing effects persist when people deal with real money in real-world settings, rather than hypothetical questions with no real-world consequences, but people usually act more rationally when they have more ‘skin in the game’. This, I think, lends support to the hypotheses that people will both a) treat the two scenarios – i. the hypothetical case, ii. the actual situation – as completely different in their minds given aforementioned threshold effects, and b) be more subject to framing effects (i.e. be less ‘rational’) in the hypothetical case. Unless you show up with a million dollars and an axe to kill the dog, the people you ask will only ever deal with the first scenario and those answers will not give much insight into what people would actually do if you came around with a check and an axe.

iv. Related to i., but still worth mentioning: There are likely threshold effects at work when dealing with choice set limitation. Poor people will be more likely to do some act X for a given amount of money Y than rich people will – but maybe it’s also the case that given some income level Z, some options simply go off the table altogether, at any price. Would a parent of three kill all their children for X dollars? This is probably where Maslow’s hierarchy of needs and similar ideas from psychology come into play. Money is a claim on resources. Still, people probably underestimate how important such claims on resources can become.

v. Related to the last part of iv. above, correspondence bias probably plays a role here, both in how people answer and in what the hypothetical choice set limitation looks like. If correspondence bias is important, it’s probably safe to say that people who’ve answered the question as if they considered it (subconsciously, perhaps) a test of their support of the tribe/allegiance/trust will be unlikely to accept the idea that they’d act perhaps even radically differently in the real-world scenario.

vi. “The report titled “The Big Payoff: Educational Attainment and Synthetic Estimates of Work-Life Earnings” [...] reveals that over an adult’s working life, high school graduates can expect, on average, to earn \$1.2 million; those with a bachelor’s degree, \$2.1 million; and people with a master’s degree, \$2.5 million.

Persons with doctoral degrees earn an average of \$3.4 million during their working life, while those with professional degrees do best at \$4.4 million.” (link)

A third way to frame the question: You’re an average Joe with a master’s degree. You’re 25 and currently expect to work another 40 years on the labour market before you retire. If you choose to kill your dog today, you get 16 years of income tomorrow. You’d be able to retire at the age of 49, instead of at the age of 65 (this is disregarding discounting, compound interest etc.; the ‘subjective true value’ of that money will likely be even higher than that). Next, repeat the question using the high school grad numbers. A million dollars is a lot of money, and it can buy you a lot of stuff.
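The arithmetic behind those figures is straightforward – a rough sketch that, like the text, disregards discounting and taxes, using the lifetime-earnings figures quoted above:

```python
def years_of_income(prize, lifetime_earnings, working_years=40):
    """How many years of average annual income a prize is worth."""
    return prize / (lifetime_earnings / working_years)

# Master's degree: $2.5 million over ~40 working years
print(round(years_of_income(1_000_000, 2_500_000)))  # 16 years

# High school graduate: $1.2 million over ~40 working years
print(round(years_of_income(1_000_000, 1_200_000)))  # 33 years
```

For a high school graduate, the check replaces more than three decades of wages, which connects back to the point in i. about financial vulnerability lowering ‘your price’.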

I assume most readers of this blog would expect that they’d take the money in a real-world setting (though it’s impossible to be sure ‘unless [someone] show[s] up with a million dollars and an axe to kill the dog…’). If you think you wouldn’t take the money in the real-world scenario, please comment below!

Appendix (added after swissecon’s comment):

A factor I didn’t include above is the ‘love of the pet’ variable. This one is a double-edged sword as well, because there are relevant tradeoffs here too: The longer you’ve had the pet, the greater the attachment you’ll feel towards it (ceteris paribus), but also the less time the pet has left of its life. All pets die, and if you’ve had your dog for a decade, then even though you love it very much you’ll know that it probably doesn’t have a lot of years left. The pet’s life has to end in a few years anyway. Lots of people who love their pets end the pet’s life before nature would, by paying a vet to euthanize it in order to ease its suffering. I’m not saying it’s an easy decision to make – I know it’s not – but lots of people do it all the time. How hard would it be to push that decision, say, 2 years ahead and get paid a million dollars to do it? 3 years? These aren’t questions I bring up just to make people uncomfortable – the point is that questions like these would be perfectly natural to ask yourself if the guy were actually standing in your yard with that 1 million dollar check and an axe. And it’s because of questions like those that I think people are lying to themselves if they claim that they’re relatively certain they would never kill the pet.

There are cases where the love will be very strong, like an 80-year-old with a 13-year-old cat. But the combination of advanced age of both the pet and the pet-owner is not exactly the default situation when dealing with pets and pet-owners. Another important factor at play in that situation is that an 80-year-old will have a lot less use for the money, because a lot of spending options available to young people are no longer available to him or her.

January 11, 2012 Posted by | bias, economics, Psychology, random stuff | 3 Comments

## China’s marriage market

I decided to start out with this:

…in order to illustrate that you could probably write a not too dissimilar post about other countries as well. Also, it’s a nice image. Image credit: Wikipedia. “Description: Sex ratio total population. Pink = Female higher than male, Green = Equal, Blue = Male higher than female.”

This post will only deal with China. Here’s some related stuff about India.

So anyway, I was skimming a few World Bank working papers and I found this one (pdf), which I decided to cover in a bit of detail here. It’s called China’s Marriage Market and Upcoming Challenges for Elderly Men and it’s written by Monica Das Gupta, Avraham Ebenstein & Ethan Jennings Sharygin. Some stuff from the paper:

“The Chinese census in 2005 reflected a staggering sex ratio at birth of 119, implying that each year there are roughly 1 million more boys born than girls. For cohorts born between 1985 and 2005, we estimate that there are 27 million more men than women, implying a large number of men will fail to marry. [...]

We demonstrate two key facts regarding the Chinese marriage market using historical census microdata from 1990 and 2000. First, economic status is a crucial predictor of marital probability for men in China. We use years of education as the closest proxy for status, and document that while there is almost universal marriage for highly educated men, lower rates of marriage prevail among men of lower education. By contrast, the marriage market for women cleared: women across the educational distribution enjoy nearly universal marriage, and are able to engage in hypergamy, choosing spouses of higher status and income. Second, since many women migrate for the purpose of marriage, it seems very likely that in the coming decades the collapse of marital prospects for men will occur in poor areas of the country with low educational attainment. [...]

The results paint a grim picture for China’s ability to care for these men under the current policy structure of social assistance and social insurance programs that are primarily locally funded (Wang 2006, World Bank 2009). We estimate that in the absence of major redistribution of education and employment opportunities across China, the marriage squeeze will be in China’s poorer regions with large minority populations. Thus it will not necessarily be the more prosperous eastern regions of China with the most skewed sex ratio at birth that will experience high marriage failure rates among men. Rather, the poorer provinces ─ with more balanced sex ratios at birth ─ will bear a disproportionate share of the social and economic burden of China’s unmarried and childless men.”

How big is the difference in marriage rates between the successful males and the not quite so successful males, I hear you ask? Well, the paper states that: “over 98% of college graduates successfully marry by age 35 whereas the proportion is under 90% for men with less than a primary education.” One way to look at those numbers is that ‘that’s actually not that big of a difference’ – it’s around 9 out of 10 or more in both cases, right? But look at who we’re actually comparing: another way to put it is that males with less than a primary education are more than 5 times as likely (over 10% vs. under 2%) to not successfully marry by age 35. To me, that sounds like a huge difference, and it’s expected to get even worse over time: “over 10 percent of men with less than primary school education aged 30+ in 2030 are projected never to marry, and this figure increases to almost half in 2050″. Of course one might argue that economic growth increases mobility (so that even poor men might be able to move to find females willing to marry them) and that ‘historical data are historical data’ which perhaps shouldn’t be given as much weight, given how much Chinese society has changed over the past decades. But rural China is still very poor and it isn’t growing much compared to the rest of the country – many of the people who have not already left for the urban provinces are people who can’t afford to, and they can’t really afford to save much either, so there’s no compelling reason in my mind to think they will be able to afford to move in the future. Incidentally, it’s not really that hard to set up a model in which mobility decreases over time even though the poor group has a positive net savings rate.
Property prices are functions of local economic conditions, and if an area experiences significant income growth whereas another area does not and the people living in the poorer area are neither able to save enough money over time to at least keep up with the income growth of the richer area nor can afford to move there in the short run, the relative property price differential and the costs of moving will go up over time, even though the poor single guy might have a significant positive net savings rate. A very simplified model illustrating this could go along these lines:

Average income of ‘poor area’ residents: 10.
Average income of ‘rich area’ residents: 100.
Poor area income growth rate: 0%.
Rich area income growth rate: 10%.

I shall assume that income growth rates and housing price growth rates are identical. In reality, housing prices are probably growing faster than income for the relevant demographic in the rich area and slower than income in the poor area. Let’s say the poor guy saves 20% of his income/year, i.e. 2 mu (‘monetary units’)/period. Say he invests that money in the rich area, earning 10%/year. After 10 years, he’ll have saved ~35 mu. How much will a house in the rich area that used to cost 100 mu cost after 10 years? 259. At the beginning, the poor guy was 98 mu short of being able to buy a house in the rich area – after ten years he’s now more than 200 mu short, even though he had a very high savings rate given his income and even though he earned a quite nice return on investment during that period. The property price differential was 90 mu to begin with, it’s 249 mu after 10 years. Maybe the effect sizes won’t be as large as assumed in the paper, but some of the dynamics described in the paper will probably play out to some degree.
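The numbers above are easy to verify in a few lines of code – a sketch of my toy model, assuming each year’s savings are deposited at the start of the year and then earn the 10% return:

```python
# Toy model: a poor-area saver chasing rich-area property prices.
# Savings go in at the start of each year and earn the rich-area
# growth rate; the rich-area house price grows at the same rate.
savings, house = 0.0, 100.0
deposit, r = 2.0, 0.10  # 20% of an income of 10 mu; 10%/year return

for year in range(10):
    savings = (savings + deposit) * (1 + r)
    house *= 1 + r

print(round(savings, 1))          # ~35.1 mu saved
print(round(house, 1))            # ~259.4 mu house price
print(round(house - savings, 1))  # shortfall grows from 98 to ~224.3 mu
```

The property price differential (the rich-area house minus the 10 mu poor-area house) accordingly widens from 90 to roughly 249 mu over the decade, just as in the text.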

Some more numbers and stuff related to these remarks from the paper:

“Poverty in China is heavily concentrated in the rural areas. Different measures of poverty all paint the same picture: while nearly 30 percent of the rural population was poor in 2005, this applied to only 5 percent or less of the urban population [...] The vast majority of the poor in 2003 lived in rural areas, and poverty is most heavily concentrated in the northwestern and southwestern regions [...] Both rural and urban incomes have continued to grow, but the rural-urban gap has continued to widen [...]

Significant proportions of urban workers are covered by formal social insurance programs: in 2007, around half of workers had pension coverage, 45 percent had Basic Medical Insurance, and 40 percent had unemployment insurance and work injury insurance [...] The rural pension system (funded mainly by personal contributions and collective subsidies) covered only about 10-11% of the rural labor force (World Bank 2009: Table 6.65), and coverage of the farm-based elderly population appeared to be particularly limited. Beneficiaries were highly concentrated in a few (mostly wealthy) provinces. [...]

Since men who are not as educated, healthy, and able to earn well tend to fail to attract a bride, they are likely to be heavily represented among those who are unable to save adequately for their old age, or labor heavily into their old age. They are the most vulnerable to income and illness shocks, since they cannot smooth fluctuations in household income by pooling earnings from spouses or children. Unmarried individuals are also more likely to be living without family to serve as caregivers (Table 5). For example, in the 2000 census, 65% of those aged 65-80 who had ever-married were co-residing with younger kin, compared with only 20% of those never-married. Moreover, levels of co-residence have dropped sharply in recent decades (Table 5), and this trend can be expected to continue. The men who fail to marry are among the least likely to be able to save for their old age, to work in their old age, and to have access to old age support from family members.”

Last, a few tables (click to view full size):

Wu Bao, Di Bao and Tekun Hu are various social assistance programs: “The Te Kun program provides cash assistance to very poor and incapacitated residents of less-developed areas, at the discretion of the local officials. The Wu Bao program, dating from the 1950s, sought to ensure that no section of the population remained destitute. In 2006, the State Council issued regulations that shift financing responsibility for wubao from village reserves to local fiscal budgets (World Bank 2008:79-80). The Di Bao program, also known as the Minimum Living Standard Scheme, provides subsidies and in-kind transfers to those living below a certain poverty line.”

More than 45% of the total income of Chinese urban residents above the age of 60 comes from pensions; the number for rural residents in the same age group is about one-tenth of that, 4.6%. Also take note of the family support numbers.

November 26, 2011 Posted by | china, data, demographics, economics | Leave a Comment

## Some remarks on debt

Maybe I’ve blogged some of this before (in the comments?) but I couldn’t find the stuff in the archives, so I decided to write this post either way. The post contains a few graphs, click on them to view them in full size.

First, some stuff from Martin Paldam’s Development and foreign debt: The stylized facts 1970-2006 (link). This is about the debt of developing countries. From the abstract:

“The paper uses the data from the incomplete debt cycle for the LDC world from 1970 onwards to tell the typical story of debt. Two debt stories are contrasted: A good debt story: Here countries borrow and invest wisely, so that they grow more. A bad debt story: Here countries borrow when they are in crisis, and the debt grows and generates low growth in the next couple of decades. The analysis concentrates on two relations: (R1) the relation between borrowing and growth, and (R2) the relation between initial debt and growth. Both relations are negative, so essentially the stylized story of debt is a story of bad debt.”

Here’s a figure, each group consists of 15 countries:

The debt of Group 1 didn’t get paid off, in case you were in doubt. They got debt relief and debt ‘restructuring’ (/’managed default’). Over time debt service went down, not up. Here’s a bit on the political economy of the debt accumulation:

“One may also ask the simple question: Why does a country borrow when it has a crisis? Is it to adjust quicker to the crisis or to be able to finance non-adjustment? Our results certainly suggest that the latter possibility dominates the picture.

The analysis has showed that debt accumulation is normally associated with some underlying problem leading to economic crises. Somehow things are going badly, and the political system is unable to handle the crisis. A foreign loan provides some wiggle room, and this is surely used to solve the most pressing problem. The reader may then ask what decision makers are most likely to take this problem to be. Think of the choice between a political stabilization and a balance-of-payments stabilization.

A political stabilization means that the popularity/support of the government is increased. This can be done either by satisfying the demands of the voters or by paying off some pressure group, such as the military, the unions etc. In both cases it costs money. Here the foreign loan comes in handy. It appears that such solutions are of a short-run character.

A balance-of-payments stabilization inevitably means that domestic absorption has to be reduced. It is obvious that this is painful and likely to cost the government some support, thus it is almost the reverse of a political stabilization. Hence, it is likely that the government may fully or partly shy away from solving the balance-of-payments crisis.”

Next, some stuff from The future of public debt: prospects and implications, by Cecchetti, Mohanty and Zampolli (link). I’ll quote from it below, but really you should read it all:

“Since the start of the financial crisis, industrial country public debt levels have increased dramatically. And they are set to continue rising for the foreseeable future. A number of countries face the prospect of large and rising future costs related to the ageing of their populations. In this paper, we examine what current fiscal policy and expected future age-related spending imply for the path of debt/GDP ratios over the next several decades. Our projections of public debt ratios lead us to conclude that the path pursued by fiscal authorities in a number of industrial countries is unsustainable. Drastic measures are necessary to check the rapid growth of current and future liabilities of governments and reduce their adverse consequences for long-term growth and monetary stability. [...]

“The financial crisis that erupted in mid-2008 led to an explosion of public debt in many advanced economies. Governments were forced to recapitalise banks, take over a large part of the debts of failing financial institutions, and introduce large stimulus programmes to revive demand. According to the OECD, total industrialised country public sector debt is now expected to exceed 100% of GDP in 2011 – something that has never happened before in peacetime. As bad as these fiscal problems may appear, relying solely on these official figures is almost certainly very misleading. Rapidly ageing populations present a number of countries with the prospect of enormous future costs that are not wholly recognised in current budget projections. The size of these future obligations is anybody’s guess. As far as we know, there is no definite and comprehensive account of the unfunded, contingent liabilities that governments currently have accumulated.”

“existing studies report that the magnitude of the long-term fiscal imbalance – the present value of unfunded liabilities arising from ageing – is very large. Hauner et al (2007) estimate the change in the primary balance required to equate the net present discounted value of all future revenues and non-interest expenditures to the debt levels prevailing at the end of 2005 for seven major industrial countries (Canada, France, Germany, Italy, Japan, the United Kingdom and the United States). The authors report that in order for these countries to pay off all their financial liabilities, they would require an average improvement in their budget balance excluding interest payments of 4.5% of GDP. For the United States and Japan, the estimate is 6.9% and 6.2%, respectively.

Other estimates are similar in magnitude. For example, Gokhale (2009) presents a measure of the long-term fiscal imbalance faced by 23 industrial countries. His estimates suggest that, for financing future benefits without future tax increases, the United States and major European countries would be required to generate an annual present value surplus of the order of 8–10% of 2005 GDP over the period to 2050.”

You can quibble over the details in the following, and I’m not a big fan of the ‘government total revenue and non-age-related primary spending remain a constant percentage of GDP at the 2011 level’-assumption, because that’s just not going to work out. But then again, that’s part of the whole point of the exercise, realizing that fact. An important point I forgot to include/remark upon in the first version of the post is that a big chunk of the projected deficits below are structural.

“We now turn to a set of 30-year projections for the path of the debt/GDP ratio in a dozen major industrial economies (Austria, France, Germany, Greece, Ireland, Italy, Japan, the Netherlands, Portugal, Spain, the United Kingdom and the United States). We choose a 30-year horizon with a view to capturing the large unfunded liabilities stemming from future age-related expenditure without making overly strong assumptions about the future path of fiscal policy (which is unlikely to be constant). In our baseline case, we assume that government total revenue and non-age-related primary spending remain a constant percentage of GDP at the 2011 level as projected by the OECD. Using the CBO and European Commission projections for age-related spending, we then proceed to generate a path for total primary government spending and the primary balance over the next 30 years. Throughout the projection period, the real interest rate that determines the cost of funding is assumed to remain constant at its 1998–2007 average, and potential real GDP growth is set to the OECD-estimated post-crisis rate.

From this exercise, we are able to come to a number of conclusions. First, in our baseline scenario, conventionally computed deficits will rise precipitously. Unless the stance of fiscal policy changes, or age-related spending is cut, by 2020 the primary deficit/GDP ratio will rise to 13% in Ireland; 8–10% in Japan, Spain, the United Kingdom and the United States; and 3–7% in Austria, Germany, Greece, the Netherlands and Portugal. [remarks about Italy that are actually quite fun to read now...] in the baseline scenario, debt/GDP ratios rise rapidly in the next decade, exceeding 300% of GDP in Japan; 200% in the United Kingdom; and 150% in Belgium, France, Ireland, Greece, Italy and the United States. And, as is clear from the slope of the line, without a change in policy, the path is unstable. This is confirmed by the projected interest rate paths, again in our baseline scenario. Graph 5 shows the fraction absorbed by interest payments in each of these countries. From around 5% today, these numbers rise to over 10% in all cases, and as high as 27% in the United Kingdom.”
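The mechanics behind ‘the path is unstable’ are the standard debt-dynamics recursion: next year’s debt ratio is this year’s ratio grown at the interest-growth differential, minus the primary balance. A minimal sketch – the 5% interest rate, 2% growth rate and persistent 8% primary deficit below are illustrative numbers in the rough ballpark of the baseline scenario, not figures taken from the paper:

```python
# Debt/GDP dynamics: d_{t+1} = d_t * (1 + r) / (1 + g) - primary_balance,
# where a primary deficit enters as a negative primary balance.
def project_debt(d0, r, g, primary_balance, years):
    d, path = d0, [d0]
    for _ in range(years):
        d = d * (1 + r) / (1 + g) - primary_balance
        path.append(d)
    return path

# Illustrative: debt at 100% of GDP, 5% interest, 2% growth,
# and a persistent primary deficit of 8% of GDP.
path = project_debt(1.0, 0.05, 0.02, -0.08, 10)
print(round(path[-1], 2))  # ≈ 2.25, i.e. ~225% of GDP after a decade
```

With the interest rate above the growth rate and a large primary deficit, the path doesn’t just rise – every year’s increment is bigger than the last, which is the instability the authors point to.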

This is also part of why I posted a link to Paldam’s paper in this post. Look at the first graph again. Many of these modern, developed countries will end up in the group of basket case countries if the people in charge don’t change their ways.

November 20, 2011 Posted by | data, economics, studies | Leave a Comment

## Zach Weiner does it again

Link.

September 29, 2011 Posted by | Cartoons, economics, philosophy | Leave a Comment

## A gem

I just found it earlier today. So do I link here, here or perhaps here? I don’t know yet – there’s much to explore and I haven’t spent a lot of time there yet. A longish quote from one of the ‘notes’ (which has more…):

““That is, from January 1926 through December 2002, when holding periods were 19 years or longer, the cumulative real return on stocks was never negative…”

How does one engage in extremely long investments? On a time-scale of centuries, investment is a difficult task, especially if one seeks to avoid erosion of returns by the costs of active management.

‘Unit Investment Trust (UIT) is a US investment company offering a fixed (unmanaged) portfolio of securities having a definite life.’

‘A closed-end fund is a collective investment scheme with a limited number of shares’

In long-term investments, one must become concerned about biases in the data used to make decisions. Many of these biases fall under the general rubric of “observer biases” – the canonical example being that stocks look like excellent investments if you only consider America’s stock market, where returns over long periods have been quite good. For example, if you had invested by tracking the major indices any time period from January 1926 through December 2002 and had held onto your investment for at least 19 years, you were guaranteed a positive real return. Of course, the specification of place (America) and time period (before the Depression and after the Internet bubble) should alert us that this guarantee may not hold elsewhere. Had a long-term investor in the middle of the 19th century decided to invest in a large up-and-coming country with a booming economy and strong military (much like the United States has been for much of the 20th century), they would have reaped excellent returns. That is, until the hyperinflation of the Weimar Republic. Should their returns have survived the inflation and imposition of a new currency, then the destruction of the 3rd Reich would surely have rendered their shares and Reichsmarks worthless. Similarly for another up-and-coming nation – Japan. Mention of Russia need not even be made.

Clearly, diversifying among companies in a sector, or even sectors in a national economy is not enough. Disaster can strike an entire nation. Rosy returns for stocks quietly ignore those bloody years in which exchanges plunged thousands of percent in real terms, and whose records burned in the flames of war. Over a timespan of a century, it is impossible to know whether such destruction will be visited on a given country or even whether it will still exist as a unit. How could Germany, the preeminent power on the Continent, with a burgeoning navy rivaling Britain’s, with the famous Prussian military and Junkers, with an effective industrial economy still famed for the quality of its mechanisms, and with a large homogeneous population of hardy people possibly fall so low as to be utterly conquered? And by the United States and others, for that matter? How could Japan, with its fanatical warriors and equally fanatical populace, its massive fleet and some of the best airplanes in the world – a combination that had humbled Russia, that had occupied Korea for nigh on 40 years, which easily set up puppet governments in Manchuria and China when and where it pleased – how could it have been defeated so wretchedly as to see its population literally decimated and its governance wholly supplanted? How could a god be dethroned?

It is perhaps not too much to say that investors in the United States, who say that the Treasury Bond has never failed to be redeemed and that the United States can never fall, are perhaps overconfident in their assessment. Inflation need not be hyper to cause losses. Greater nations have been destroyed quickly. Who remembers the days when the Dutch fought the English and the French to a standstill and ruled over the shipping lanes? Remember that Nineveh is one with the dust.

In short, our data on returns is biased. This bias indicates that stocks and cash are much more risky than most people think, and that this risk inheres in exogenous shocks to economies – it may seem odd to invest globally, in multiple currencies, just to avoid the rare black swans of total war and hyperinflation. But these risks are catastrophic risks. Even one may be too many.

This risk is more general. Governments can die, and so their bonds and other instruments (such as cash) be rendered worthless; how many governments have died or defaulted over the last century? Many. The default assumption must be that the governments with good credit, who are not in that number, may simply have been lucky. And luck runs out.”

Here’s another:

“Why IQ doesn’t matter and how points mislead

One common anti-IQ argument is that IQ does nothing and may be actively harmful past 120 or 130 or so; the statistical evidence is there to support a loss of correlation with success, and commentators can adduce William Sidis if they don’t themselves know any such ‘slackers’, or the Terman report’s similar findings.

This is a reasonable objection. But it is rarely proffered by people really familiar with IQ, who also rarely respond to it. Why? I believe they have an intuitive understanding that IQ is a percentile ranking, not an absolute measurement.

It is plausible that the 20 points separating 100 and 120 represents far more cognitive power and ability than that separating 120 and 140, or 140 and 160. To move from 100 to 120, one must surpass roughly 20% of the population; to move from 120 to 140 requires surpassing a smaller percentage, and 140–160 smaller yet.

Similarly, it should make us wonder how much absolute ability is being measured at the upper ranges when we reflect that, while adult IQs are stable over years, they are unstable in the short term, and test results can vary dramatically even if there are no distorting factors like emotional disturbance or varying caffeine consumption.

Another thought: are the kids in your local special ed program mentally closer to chimpanzees, or to Albert Einstein/Terence Tao? Pondering all the things we expect even special ed kids to learn (eg. language), I think those kids are closer to Einstein than monkeys.

And if retarded kids are closer to Einstein than the smartest non-human animal, that indicates human intelligence is very ‘narrow’, and that there is a vast spectrum of stupidity stretching below us all the way down to viruses (which only ‘learn’ through evolution).”

Incidentally, the 20 percent number is somewhat off – if you assume IQ is ~N(100,15), which is pretty standard, then by going from 100 to 120 you will pass by ~40 percent of all individuals, not 20. If you don’t have a good sense of the scale here, it’s a useful rule of thumb to know that ~2/3rds of the observations of a normally distributed variable will be within one standard deviation of the mean. When you jump from 120 to 140, you pass 8,7 percent of all humans, assuming ~N(100,15), a much smaller group of people.
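For anyone wanting to check these numbers themselves, here's a quick sketch using Python's standard library (the ~N(100,15) assumption is the one from the text):

```python
from statistics import NormalDist

# IQ assumed to be distributed ~N(100, 15), as in the text
iq = NormalDist(mu=100, sigma=15)

def share_between(lo, hi):
    """Fraction of the population with an IQ between lo and hi."""
    return iq.cdf(hi) - iq.cdf(lo)

print(f"100-120: {share_between(100, 120):.1%}")      # ~40.9 %
print(f"120-140: {share_between(120, 140):.1%}")      # ~8.7 %
print(f"within 1 sd: {share_between(85, 115):.1%}")   # ~68.3 %, i.e. ~2/3rds
```

The last line is the rule of thumb mentioned above: about two thirds of a normally distributed variable falls within one standard deviation of the mean.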

But yeah, as to the rest of it, I have always had some problems with figuring out how to interpret IQ differences, in terms of how differences in IQ translate into differences in ‘human computing power’. And reading the above, it makes perfect sense that I’ve had problems with this, because that’s not easy at all. I wasn’t really thinking about the fact that the variable is at least as much about ordering the humans as it is about measuring the size of the CPU. That’s probably in part because I have an IQ much lower than Gwern’s.

September 20, 2011 Posted by | bias, blogs, economics, random stuff | 4 Comments

## Some numbers

I spent a bit of time at Statistikbanken (Statbank Denmark) yesterday; below are some numbers from it that might be of interest. When you click the link you get to the front page of the site – now, if you look to the right there’s a small Union Jack which says ‘English’ if you hover over it. Click this and you get to the English version of the site. I don’t think all of the stuff at the Danish version of the site has been translated for the English version – but a lot of it has, so if you’re a foreigner curious about Denmark and the Danes, go take a look.

i. This part contains data from ‘KRHFU1: Befolkningens højeste fuldførte uddannelse (15-69 år) efter område, herkomst, uddannelse alder og køn’.

In 2010, when looking at the age segment of Danes who were 30-34 years old, 20494 Danish males and 22812 Danish females had as their highest achieved education level completed a ‘long-cycle higher education’ (I think this is the term they use in the English version of the data; in Danish it’s just ‘lang videregående uddannelse’. It corresponds to an education level above BA-level but below PhD-level, i.e. a Master’s Degree or equivalent). Notice that more females than males at that age have completed this level of education. This is also true after you correct for the fact that there are more males than females in that age segment of the population; in total, there were 177078 males and 176291 females in that age segment of the Danish population. In terms of percentages of the total population in the specific age segment, 11,6 % of the males and 12,9 % of the females at the age of 30-34 had completed a long-cycle higher education in 2010 – a relative gender difference of about 11 percent.

Now, a funny thing happens when you compare these numbers to the age segment of Danes at the age of 65-69 (people who’ve just retired). In that sample, 9655 males and 3818 females have a long-cycle higher education – out of 146029 males and 152812 females. In that sample, 6,6 % of the males and just 2,5 % of the females have a long-cycle higher education – males in that age group are more than 2,5 times as likely to have a long-cycle higher education as females.
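The percentage figures can be reproduced directly from the counts quoted above; a minimal sketch:

```python
# Counts taken from the Statbank extract discussed above
cohorts = {
    "30-34": {"edu_m": 20494, "edu_f": 22812, "pop_m": 177078, "pop_f": 176291},
    "65-69": {"edu_m": 9655,  "edu_f": 3818,  "pop_m": 146029, "pop_f": 152812},
}

for age, c in cohorts.items():
    share_m = 100 * c["edu_m"] / c["pop_m"]
    share_f = 100 * c["edu_f"] / c["pop_f"]
    print(f"{age}: males {share_m:.1f} %, females {share_f:.1f} %")
```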

How does it look when you include the age groups in between those two? Like this:

More females than males get a long education today and it’s been that way for at least 10-15 years.

ii. This part contains data from ‘Folketal pr. 1. januar efter tid, alder og køn’ and ‘KM6: Befolkningen 1 januar efter kommune, køn, alder og folkekirkemedlemsskab’

(red: females, blue: males. The x-axis is age, the y-axis is the percentage of each age group who are members of Folkekirken)

So I took out the number of male and female members of Folkekirken at the ages of 1-80 and divided by the total number of Danes in each age group – this gives a measure of how big a percentage of each age group is a member of Folkekirken (the Danish National Church). There seem to be some age cycles here. I did a quick logical test in Excel to get an overview of how the membership rate changes from age group to age group. At the ages of 1-15 years, membership grows ‘every year’ (2-year-olds are more likely to be members than 1-year-olds, etc.). For the age group of 18-27-year-olds, membership drops ‘every year’. Between 30 and 43 it pretty much grows every year again, then it stabilizes around the new level. For people above the age of 55, it pretty much grows every year again. I decided not to include people above the age of 80 because nothing much of interest happens there; as should be clear from the graph, this age segment has by far the highest membership rates and more than 9 out of 10 are members. When interpreting the relatively low membership of children to the left of the graph and the membership growth among the 1-15-year-olds, remember that part of this is probably due to the relatively higher fertility of Muslim immigrants (rather than an increase in the number of atheist children).
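The Excel ‘logical test’ amounts to checking, for each pair of adjacent age groups, whether the membership rate went up or down; a sketch of the same idea (the rates below are made-up illustration values, not the actual Statbank figures):

```python
def directions(rates):
    """For each pair of adjacent age groups, report whether the
    membership rate is higher ('up') or not ('down')."""
    ages = sorted(rates)
    return {(a, b): "up" if rates[b] > rates[a] else "down"
            for a, b in zip(ages, ages[1:])}

# Made-up illustration values, not the actual Statbank figures
rates = {1: 0.78, 2: 0.79, 3: 0.80, 18: 0.84, 19: 0.83, 20: 0.82}
print(directions(rates))
```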

iii. This part contains data from ‘FAM55N: Husstande pr. 1. januar efter kommune/region, husstandstype og husstandsstørrelse’. Every time some econ blog posts something about the household income development over time (like this one) I also see a commenter asking: ‘but what about household size?’ What I very rarely see is a commenter linking to actual data on household size. This puzzles me every time, because at least in Denmark that kind of data actually isn’t all that hard to get your hands on. Here’s a quick run from Statistikbanken:

I omitted some of the classes because otherwise it quickly gets very messy and they don’t add much to the big picture anyway; this is why the numbers don’t quite add up to the total population – but the table does include the vast majority of Danes (the 2011 numbers include 4,92 million people, the 1986 numbers 4,42 million people). The number of single person households with one male or one female living alone has increased somewhat. If you wanted to do it completely right, you’d add all the omitted classes as well before making the calculation, but in terms of the people in the sample (which covers ~90% of all Danes) the percentage of people living in single person households went up from 16,2 % to 20,3 %. In terms of the percentage of all households that are single person households, the number is of course much higher. In 1986, 35,6 % of all households (in the sample) were single person households; in 2011 it was 41,5 %. The number has gone up, but less than I’d thought.
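The gap between the two percentages (20,3 % of people vs. 41,5 % of households) is just arithmetic: a single-person household contributes one person to the person count but one full household to the household count. An illustration with hypothetical round numbers, not the Statbank counts:

```python
# Hypothetical household counts, chosen only to illustrate the arithmetic
households = {"single": 1_000_000, "couple": 700_000, "couple_two_kids": 700_000}
persons_per = {"single": 1, "couple": 2, "couple_two_kids": 4}

total_households = sum(households.values())
total_people = sum(n * persons_per[k] for k, n in households.items())

share_of_households = households["single"] / total_households
share_of_people = households["single"] / total_people  # singles contribute 1 person each
print(f"{share_of_households:.1%} of households, {share_of_people:.1%} of people")
```

With these made-up numbers, singles are about 42 % of households but only about 19 % of people – the same kind of gap as in the Danish data.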

I found it interesting that the number of households with a married couple and 3-4 inhabitants altogether (the most likely constellation being a married couple plus 1 or 2 children) has decreased significantly, and that the movement from ‘married couples’ to ‘other couples’ does not explain all of it. Is the driver an increase in the divorce rate or a lower fertility rate? I don’t know.

September 14, 2011

## Euler’s formula and Euler’s identity

The video is from more than half-way through the calculus coursework, so if you’re unfamiliar with this stuff there’ll probably be some things you don’t understand even though he keeps it simple and doesn’t go through a more formal proof. The Maclaurin series he’s talking about is just a Taylor series evaluated at x=0; at uni we always call them Taylor series or Taylor expansions, but apparently naming conventions differ.

The three videos before that one build up to this, but if you’re familiar with maths and can remember how to do Taylor expansions and how to deal with trigonometric functions, you should be able to follow this quite easily without watching those as well; I could, as he doesn’t deal with anything here that I haven’t had exams in at a previous point in time. It probably didn’t do any harm that I read 100 pages in Discrete Mathematical Structures this weekend, parts of which contained a brush-up on permutations and factorials (the “!”-thingies in the formulas).
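Euler's formula says e^(ix) = cos(x) + i·sin(x), and the Maclaurin-series argument from the video can be checked numerically; a small sketch (a sanity check, not the proof):

```python
import cmath
import math

def exp_maclaurin(z, terms=30):
    """Partial sum of the Maclaurin series for e^z: sum of z**k / k!."""
    return sum(z**k / math.factorial(k) for k in range(terms))

x = 1.2
series = exp_maclaurin(1j * x)
closed_form = complex(math.cos(x), math.sin(x))
print(abs(series - closed_form) < 1e-12)  # the series matches cos(x) + i*sin(x)

# Euler's identity as the special case x = pi: e^(i*pi) + 1 = 0
print(abs(cmath.exp(1j * math.pi) + 1) < 1e-12)
```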

Videos like these were the kind of stuff I had to cut down on a lot during the last month leading up to the exam, those and non-study books. I’m behind on the blogging of the books I’m reading but I’ll get to it.

What do economists learn during their education? Most people would probably guess that they/we learn a lot of stuff about markets, industries/firms and some political economy (‘how the economy works’) and such. Maybe something about ‘how to calculate the numbers’. This is another side of the coin. Even though we wouldn’t be asked to go through that proof at an exam, we are (at least some of us) probably expected to know enough math to be able to understand something like this (it depends on the courses). There’s a lot of math and statistics in (some areas of) economics. There’s actually enough to make a guy who voluntarily decides to watch a video like the one above in his spare time think it’s a little too much. Though of course part of the reason why I feel that way is that I suck at math, which is also why I try to get better at it – at least I’m not a math atheist. While we’re dealing with comics, there’s also this.

June 22, 2011