An Introduction to the Theory of Knowledge

“The theory of knowledge, or epistemology, is one of the main areas of philosophy. […] This book is intended to introduce the reader to some of the main problems in epistemology and to some proposed solutions. It is primarily intended for students taking their first course in the theory of knowledge, but it should also be useful to the generally educated reader interested in learning something about epistemology. I do not assume that the reader has an extensive background in philosophy.”

I’ve read Lemos’ book. It’s always bothersome to blog philosophy, and I’ve been uncertain how best to blog this one. In the end I decided to add some links covering a lot of the material in the book, and then add a few comments about it. I haven’t quoted very much from it, frankly because life’s too short for that. I didn’t rate the book, but would have given it either one star or two if I could make up my mind. If you read all the links below, I think you’ll have a pretty good idea of what kind of stuff’s covered in this book. No, I haven’t read the stuff in the links, but from a brief skim they seem to deal with many of the same topics and specific issues covered in the book. I’ve read about many of the topics covered in the book before, but generally in much less detail.

Okay, links first – you should note that most of these links are not to wikipedia articles, but rather to articles from the Stanford Encyclopedia of Philosophy, which I’ve talked about before, as that site has much better coverage of the relevant topics than wikipedia does:

Is Justified True Belief Knowledge? (‘The Gettier problem’).
Foundationalist Theories of Epistemic Justification.
The Coherence Theory of Truth (/or perhaps better: ‘…of justification’).
Virtue Epistemology.
Inference to the best explanation (/abduction).
Internalist vs. Externalist Conceptions of Epistemic Justification.
A Priori Justification and Knowledge.
The Analytic/Synthetic Distinction.
Naturalized Epistemology.

A quote from the book:

“There are many forms of naturalized epistemology [NE] and it is hard to say exactly what it is. The various forms have different views about the relations between natural science and traditional epistemology. In its most radical forms, naturalized epistemology holds that traditional epistemology should be abandoned or at least replaced by some empirical science, such as psychology. Other less radical forms of naturalized epistemology don’t call for the abandonment of traditional epistemology but hold that the empirical sciences, especially psychology, can solve or help to resolve many of the problems confronting traditional epistemology. […] In general, proponents of naturalized epistemology stress the importance of the natural sciences for epistemological inquiry. […] Instead of focusing on the justification of our beliefs, [Quine, one of the proponents of NE, thinks] we should rather be seeking a scientific explanation of how we get those beliefs. Instead of being concerned with the normative or evaluative status of beliefs we would be concerned with a descriptive inquiry about the psychological processes that produce them. […] Traditional epistemology is concerned with normative or evaluative concepts such as justification, reasonableness, and knowledge. It asks, for example, how do our sensory experiences justify our beliefs about the external world. In contrast, Quine seems to propose that we set aside these normative or evaluative questions, and ask how our sensory experiences cause or bring about our beliefs. Traditional epistemology and the sort of inquiry Quine advocates are thus concerned with different relations between sensory experience and belief.”

This quote is from the last chapter. The word ‘science’ is mentioned exactly once in the first 182 pages of this book. This is perhaps the easiest way for me to express how irrelevant I think many of the thoughts in the book are to anything ‘real’ or ‘useful’. My impression is that these people are using ill-defined concepts to talk about other ill-defined concepts in order to solve theoretical problems of limited relevance to anyone. Even the chapters and approaches that make some sense are far too lacking in detail to be informative enough to be all that interesting; the author uses a lot of pages to say very little, and he frequently repeats himself.

The author frequently states in the chapters that ‘it’s obvious that we know X’, where X is some specific thing he considers it obvious that we know, even though the whole book is basically about how we can even determine how to justify knowledge claims in the first place, which makes it far from obvious whether and how we actually do know what he claims we know. Whenever he came up with specific examples of things we obviously knew, I often thought along the way: ‘…we do? How? You have not defined your terms clearly enough for that claim even to be evaluated.’

Throughout most of the book the author takes it as a given that what people claim to know, and supposedly feel justified in knowing, is relevant to how to optimally justify beliefs (or whatever it is he and his colleagues are hoping to do), even though an inquiry like this one really ought to address whether that is even true. It’s not that this problem isn’t addressed at all, but to say it is remotely satisfactorily addressed would certainly be a statement with which I would disagree. This is really problematic, because sometimes knowledge claims people supposedly hold are used as arguments for particular approaches to justifying beliefs (‘it’s common for people to believe X (…think themselves justified in believing X), so it must be good and proper to believe X (…)’), without those claims being scrutinized at all closely. There’s (almost) no science included in this book on how often people are wrong, what they’re likely to be wrong about, in which situations they’re most likely to be wrong, in which direction they’re likely to be wrong, or anything like that, though reliabilism does step somewhat closer to such things than the other approaches do. The idea of explicitly including such stuff in epistemological research is briefly addressed in the last chapter, as implied in the quote mentioning Quine above (see also below).

The first nine chapters, and most of epistemology, deal almost exclusively with justification (and arguments), not with the question of why people hold the beliefs they do. What you get is a lot of arguments about why X (or Y, or Z…) is clearly the best way to evaluate beliefs in a specific manner dissimilar from the other approaches available, and why Y (…or X, or Z) is clearly inferior; some approaches are directly related, with one being in some sense ‘the opposite’, along some relevant dimension, of another. Arguments are mostly based on very simple logic and/or on examples of various kinds meant to illustrate aspects which are potentially problematic, or not, for a given position. There is pretty much zero science testing which of the methodological approaches are more likely to yield accurate beliefs, to the extent we can even test that, though the literature on reliabilism, an approach to belief justification in which justification depends on whether the processes causing the beliefs are reliable and so likely to yield accurate beliefs, might have some stuff on that (it is not included in the book).

Some of the claims in the book I have no idea how the author even justifies making in the first place, which makes it awkward to criticize the ideas presented, especially as some of the most problematic assumptions are introduced implicitly, in some sense before the analysis even begins; you’re supposed to agree with this part for any of the stuff that follows to make sense, and if you don’t agree, or would perhaps prefer to understand some of the implications of agreeing before moving on, then you’re in trouble. There seems to me to be a huge number of assumptions hidden all over the place in this book, and it’s really annoying that these assumptions are never addressed; I distinctly suspect that some of those hidden assumptions are either stupid, wrong, or some combination of the two.

Most of the work in this book, and, judging from the coverage, most of epistemology, deals with the question of how best to justify believing things while completely ignoring all data about how people actually go about forming the beliefs they hold. I frankly find this approach incredibly stupid. But then again this may just reflect the fact that I’m a lot more interested in the latter question than in the former, and some people will find said approach perfectly reasonable. I’m actually really uncertain about who ignores what in the main chapters, because it seems to me that a reliabilist approach not informed by actual knowledge about belief formation is completely meaningless, yet the author seems to claim later on that the only people who do not deliberately ignore data like this are the people belonging to the various schools of the naturalized-epistemology branch. I’m not curious enough to find out what’s going on here, because I don’t really care.

Although, as mentioned, I’m not completely certain about the details, it does seem that many epistemologists find it reasonable to ignore where the beliefs people hold come from. I feel like I should remind people reading along that, from what I’ve gathered so far, in other areas of research people have often found that when you answer some of the types of questions I’m most interested in in this context, questions about stuff like where beliefs come from, you also automatically tend to answer some of the questions the ‘let’s not use science’ crowd likes to ask (like the question of how justifiable various approaches really are); either that, or you demonstrate how some of those questions don’t make sense. It seems to me that the more you know about how beliefs are actually formed, the easier it gets to replace theoretical models with actual variables of interest; the more you know about why people hold the beliefs they do, the easier it may well be to evaluate specific approaches to judging them, because as you proceed you’ll gradually replace your judgmentalism with actual knowledge. It doesn’t make sense to me to fault people for using an approach to belief evaluation which is less likely than a competing approach, demonstrated in a theoretical framework to be more accurate, to yield accurate estimates of what the world is really like, if the optimal theoretical framework derived from ‘pure epistemology’ is based on an infeasible model of belief formation. If you’re not justified in believing that flying elephants are real when you’ve been drinking a lot of alcohol, then it might be a good idea to address whether or not you’ve been drinking alcohol when evaluating the beliefs you hold. If you’re more likely to become religious if your parents were religious, and if the religious beliefs you hold seem to be socially mediated to some extent, then that likewise seems like relevant information when evaluating how to justify religious beliefs.
Stuff you can explain (using data), including beliefs and belief-formation processes, whether or not the beliefs in question are ‘moral beliefs’, may be easier to justify, or reject, as the case may be, than stuff you can’t explain. These remarks of course pertain not only to epistemology but also to other branches of philosophy, like moral philosophy. Of course I’m aware that some people from this field might argue/object that you can’t know that you actually know what I implied we might get to know from data (like alcohol making people more likely to observe flying elephants); this stuff is complicated. My point is that I think a lot of it is needlessly complicated, and/or perhaps that it’s ‘the wrong complications’ people are looking at.

Should we justify our beliefs based on whether they agree with other beliefs we have? How can we say, before we’ve figured out to what extent we actually do that? If humans don’t do that kind of thing, then why would you ask a question like that in the first place? And if you can’t figure out the extent to which they do, the same question applies: why should we care about the answer? Yet according to the book, coherentists don’t seem to care much about how people form beliefs; they mostly care about how people justify their beliefs. Analogous stuff seems to be going on in other contexts. Do note that it’s possible to obtain data both on which beliefs people hold and on how they justify holding said beliefs, at least in theory (you can just ask people, but there are other ways to approach such questions as well, so you needn’t always make do with the lies and confused feedback people might come up with when asked questions like those…). Regardless of whether you think epistemologists should only concern themselves with the question of justification (many philosophers seem to hold this view) or whether you’d like them to also address questions pertaining to which beliefs people actually do hold (and how they come to hold them), there is both a ‘descriptive justificationalism’ and a ‘normative justificationalism’ (the latter is just classical epistemology, it seems, judging from the coverage), and if you’re doing only one of those you’re probably missing out on some relevant stuff. Instead of having all those arguments about the proper way to think about these things, why not at least try to address the descriptive part: find some data and figure out how people justify believing the things they do? At first I thought this was the approach called reliabilism in the book, but now I’m really not sure what that stuff’s about.
Anyway, collecting data and starting to figure out how people justify their beliefs would seem to me a necessary starting point for any analysis of these sorts of things; it is the sort of thing these people should have done a long time ago. There are lots of claims in the book about what people may justifiably believe and how they should go about justifying it, but there’s not much data, and this field could really use some. What if people tend to use some approaches (e.g. coherence) to justify some types of beliefs, but other approaches (reliabilism) to justify others? How is that not potentially relevant? Do some of the main claims of specific theories even make sense, in light of scientific discoveries made over the years? Here’s a related quote from the Stanford epistemology article:

“According to an extreme version of naturalistic epistemology, the project of traditional epistemology, pursued in an a priori fashion from the philosopher’s armchair, is completely misguided. The “fruits” of such activity are demonstrably false theories such as foundationalism, as well as endless and arcane debates in the attempt to tackle questions to which there are no answers.” (my emphasis).

I had the impression while reading the book that foundationalism might just be ‘complicated bullshit’, but I found it really hard to figure out exactly what these people were actually trying to argue, so I decided to withhold judgment. I’m still not sure exactly what they’re arguing, nor for that matter do I understand why they’d ever think it a good idea to approach these sorts of questions in the proposed manner, but it’s safe to say that the proponents haven’t exactly convinced me that this framework is the right one. Maybe it goes without saying, but I am of course somewhat sympathetic to naturalistic epistemological approaches.

One of the main problems I have with this book is one it shares with some other philosophical works I’ve encountered: people in this field seem to have a tendency not to evaluate ideas or arguments based on how well they explain data, but instead mainly on how internally consistent and logically coherent the various theories are. These people consider it very relevant whether a given theory can handle all potential counterexamples and counterarguments; if you can find a clever idea illustrating that a theory doesn’t work in some specific context, because of some implication of what’s already been assumed and/or some contrived example, then you’re golden. But very few people go out, pick up data, and look at how the theories relate to it, because data is not the currency of philosophy. If you write a philosophical text, you’d better have an argument ready to explain an elephant carrying around a radio playing Tchaikovsky (an actual example from one of the chapters, used to illustrate a problem with a specific theory). Nobody knows if the elephant is relevant, because nobody ever seems to bother to look at the data and try to figure out how often people encounter elephants carrying around radios playing Tchaikovsky. I find this frustrating.

I have added one more quote from the book’s last chapter below, as well as some related remarks:

“The limited naturalist holds that defining or giving an analysis of central epistemic concepts such as knowledge, justification, or evidence is a properly philosophical activity. There are also normative questions and issues that are appropriate topics for philosophical investigation. Thus, it is the business of philosophy to discover what makes beliefs justified or reasonable, to discover criteria for justified belief. […] So far this sounds very much like traditional epistemology. But now suppose that we want to know, for example, whether a belief is an instance of knowledge. In order to know whether it was we would need to know whether it met our standard. We would need to know whether it did in fact come from a cognitive process with the appropriate degree of reliability. Presumably, empirical psychology would be relevant to telling us whether our beliefs did in fact meet that standard. Empirical psychology could identify what cognitive processes did in fact produce our beliefs and tell us whether those processes met the requisite standard of reliability. So, according to this view, empirical psychology can be relevant to whether some belief of ours counts as knowledge.”

The above seems, judging from the book, to be ‘as far’ as most epistemologists want to go at the moment. It’s very curious to me that they seem to think cognitive processes are the only thing that may matter, and that cognitive science and psychology are all you really need in order to evaluate beliefs and belief-formation processes. I wonder if these people have ever heard of the problem that different sources may not be equally reliable in terms of telling you stuff about the world, stuff relevant to belief formation and to evaluating the knowledge one possesses (the Daily Mail vs. the New England Journal of Medicine)? Or that sources of different reliabilities may be mixed up with each other in a non-trivial manner, and yet you’re still supposed to come up with some idea about what to think about X? Have they perhaps even heard of statistical analysis?

It seems to me that a lot of scientists these days are working really hard to do exactly the sort of work epistemologists claim they’re trying to do, yet repeatedly fail at. Or, if you’re more gracious to the work being done in that field, the scientists are doing complementary work of some importance. Are you better justified in trusting a scientific report than a random newspaper article, and in which cases might you or might you not be? Might there be some systematic way to approach the question of which types of evidence are best when making judgments? Might there perhaps even be some natural hierarchical ordering of the scientific evidence available to us (prospective studies > retrospective studies, all else equal; meta-reviews of prospective studies > a single prospective study, all else equal), which might be helpful in terms of promoting accurate belief formation (and belief-formation strategies)? This field could be very broad. Perhaps it really is, and some people are working on these sorts of questions. But you wouldn’t know it from this book, and I’m not sure the people addressing such far more relevant questions than many of the ones addressed in the book go by the name of epistemologists.

November 27, 2014 | Books, Philosophy