I’ve written a lot about models on this blog in the past, so some of what I’m writing now I’ve probably covered before. I thought it was worth revisiting the subject anyway.
First off, one way to think about a mental model is as a way of thinking about a problem. This also implies that if there’s a problem of some sort, you can construct a model. And thus, from a certain point of view (…the point of view of mathematicians, economists, engineers, or…), there’s always a model. It can be implicit or explicit – but it’s there somewhere. A model is an explanation, and it’s always possible to come up with an explanation. So when you see a model you don’t like, it’s not very helpful to say that ‘it’s only a model’. What else would it be? Whatever you’d put in its place is also a model, from a certain point of view. If the model presented is an inaccurate representation of the problem at hand, then it’s the inaccuracy that should be the subject of criticism, not the fact that it’s a model.
Most people dislike formal models that are very specific and give very precise estimates. They know instinctively that these models are simplistic and that the real world is much more complicated – so the perceived over-precise estimates may be way off and may even seem downright silly. Skepticism is warranted, surely. But the precision is also a very helpful aspect of such models, because precision allows us to be demonstrably wrong about something. I’d argue that this is also an important part of why such models are disliked. Many people who’ve worked a bit with models hold formal models in quite low regard because they know the assumptions drive many of the results. They are skeptical and prefer the models in their own minds. Those ‘mind models’ are much less specific, much more flexible and much less likely to actually generate testable hypotheses. It’s not that they are necessarily wrong – it’s more that they’re unlikely to ever be proven wrong. People who’ve not worked with models are also skeptical of them, and their mind models are even less specific and testable still.
Here’s the thing: If you think that it makes good sense to be skeptical of models where assumptions are clearly stated beforehand, where parameters/parameter estimates are generated through a clear and transparent process and where limitations are addressed, then you should be a lot more skeptical of models where these conditions are not met.
Most people prefer vague models because they are more convenient. You’re less likely to be proven wrong; you’re less likely to take a stance that is at odds with the tribe; and if the model is general enough it will be able to predict anything, making you think that you’re always right. They’re also often less computationally expensive to formulate.
Here’s one hypothesis from a model: ‘Immigrants from country X are 2.5 times as likely to have a criminal record as are non-immigrants.’
Here’s another hypothesis: ‘Immigrants from country X are more likely to have a criminal record than are non-immigrants.’
Here’s a third hypothesis: ‘Some immigrants from country X have a criminal record.’
Here’s a fourth hypothesis: ‘Some people commit crime.’
Which one of these hypotheses has the greatest information potential – that is, the potential to tell us the most about the world? The first one, since the other three are all true whenever it is. Which one is most likely to be considered correct when evaluated against the evidence? The last one.
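The nesting of the four hypotheses can be sketched in a few lines of code. The crime rates below are made up purely for illustration; the only point is the entailment chain – whenever the most specific claim holds, each weaker one follows.

```python
# Hypothetical crime rates, chosen only to illustrate the nesting.
rate_immigrants = 0.05      # assumed rate among immigrants from country X
rate_non_immigrants = 0.02  # assumed rate among non-immigrants

h1 = rate_immigrants >= 2.5 * rate_non_immigrants    # '2.5 times as likely'
h2 = rate_immigrants > rate_non_immigrants           # 'more likely'
h3 = rate_immigrants > 0                             # 'some have a record'
h4 = rate_immigrants > 0 or rate_non_immigrants > 0  # 'some people commit crime'

# The entailment chain (h1 implies h2 here because the non-immigrant
# rate is positive; h2 implies h3 because rates are non-negative).
assert not h1 or h2
assert not h2 or h3
assert not h3 or h4
print(h1, h2, h3, h4)
```

The more specific the hypothesis, the more worlds it rules out – and the more ways the data could have shown it to be wrong.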
From an information-processing point of view, having nothing but correct beliefs you are certain about is not a good thing. That’s a sign that your models are very poor and don’t contain a lot of information. If you never seem to be wrong (or never realize you’re wrong), that’s a sign that you’re doing something wrong.
Sometimes the ‘models’ we make use of when evaluating evidence are of the variety: ‘I’d like X to be true (because Y, Z), so obviously X is true.’ Sometimes that’s the model you use when you reject the presented formal model with a beta estimate of 0.21 and a standard deviation of 0.06. This is worth keeping in mind.
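For scale, the quoted numbers are easy to check. Here is a minimal sketch, assuming the 0.06 figure is the standard error of the beta estimate and an approximately normal sampling distribution (both assumptions are mine, not part of the original example):

```python
from math import erf, sqrt

beta, se = 0.21, 0.06
z = beta / se  # z = 3.5

# Two-sided p-value under a standard normal distribution,
# using the error function to get the normal CDF.
p_two_sided = 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))
print(z, round(p_two_sided, 4))  # z = 3.5, p ≈ 0.0005
```

Under those assumptions the estimate is three and a half standard errors from zero – rejecting it on the basis of ‘I’d like X to be true’ says more about the mind model doing the rejecting than about the formal model being rejected.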
On a related note, of course not all models are about generating hypotheses and testing them – some are instead meant to illustrate certain aspects of a problem at hand in a simple and transparent manner. It’s always important to keep in mind what the model is trying to achieve. That goes for the ‘mind models’ too. Are you trying to learn new things about the world, or are you just trying to be right?