Some of these are penultimate drafts. Please cite the final, published version. The rest are drafty drafts. Cite and circulate if you will, but don’t hold me to them.
Forthcoming in Oxford Studies in Metaethics
Evolutionary debunking arguments start with a premise about the influence of evolutionary forces on our evaluative beliefs, and conclude that we are not justified in those beliefs. The value realist holds that there are attitude-independent evaluative truths. But the debunker argues that we have no reason to think that the evolutionary forces that shaped human evaluative attitudes would track those truths. Worse yet, we seem to have good reason to think that they wouldn’t: evolutionary forces select for creatures with characteristics that correlate with survival and genetic fitness, and beliefs that increase a creature’s fitness and chances of survival plausibly come apart from the true evaluative beliefs. My aim in this paper is to show that no plausible evolutionary debunking argument can both have force against the value realist and avoid collapsing into a more general skeptical argument. I begin by presenting what I take to be the best version of the debunker’s challenge. I then argue that we have good reason to be suspicious of evolutionary debunking arguments. The trouble with these arguments stems precisely from their evolutionary premise. The most ambitious of them threaten to be self-defeating: they rely on an evolutionary premise that is too strong. In more modest debunking arguments, on the other hand, the evolutionary premise is idle. I conclude that there is little hope for evolutionary debunking arguments. This is bad news for the debunker who hoped that the cold, hard scientific facts about our origins would debunk our evaluative beliefs. She has much to do to convince us that her challenge is successful.
Should learning that we disagree about p lead you to reduce your confidence in p? Some who think it should want to exempt beliefs in which you are rationally highly confident. Against this I show that quite the opposite holds: factors that justify low confidence in p also make disagreement about p less significant. I examine two such factors: your antecedent expectations about your peers’ opinions and the difficulty of evaluating your evidence. I close by showing how this initially surprising result can help us think about confidence, evidence, and disagreement.
It can be disturbing to realize that your belief reflects the influence of irrelevant factors. But should it be? Such influence is pervasive. If we are to avoid mass agnosticism, we must determine when evidence of irrelevant belief influence is undermining and when it is not. I provide a principled way to do this. I explain why belief revision is required when it is, and why it isn’t when it isn’t. I argue that rational humility requires us to revise our beliefs in response to such evidence. I explain the nature and import of such humility: what it is and what it is to accommodate it. In doing so, I bring to light a little-discussed epistemic challenge, and explain its significance in a way that provides insight into the role of rational humility in our epistemic lives.
I argue that epistemic bootstrapping is an inevitable ugliness we must accept if we opt for externalist views like reliabilism. But, I also argue, bootstrapping is not so ugly—or at least reliabilists shouldn’t think that it is. Bootstrapping only looks bad through an internalist lens. This is because the sorts of knowledge that are “too easily” generated on a reliabilist view will generally be quite useless on that view. In particular, we won’t be able to do with it all the naughty things that internalists think we can do with knowledge. In short, it seems worrying that externalist views allow for bootstrapping. But if you are really an externalist, then bootstrapping is nothing to worry about. I close with some thoughts about what a truly externalist position looks like.