
Thursday, August 6, 2015

Cilantro, Moral Truth, and Justification

Preamble: 
I've been working on this paper for longer than I care to admit, but I have to turn it in at some point. I've written about 4 or 5 different versions of it, all with different solutions or non-solutions to the puzzle I present. Anyhow, a few notes:

(A) For some reason the footnotes didn't post to the blog so some of my clarificatory points aren't here. Here are two of the important ones. 
     (1) The anti-realist position I'm concerned with is error theory (there are no moral facts and moral propositions have cognitive content). 
     (2) In the last section I talk a lot about "evidence". What counts as evidence in moral arguments would require its own paper so I make some assumptions about what counts: moral judgments, intuitions, principles, and emotions. I'm happy to include other candidates. 

(B) For non-philosophers all you really need to know to understand this paper is what moral anti-realism is. In the simplest terms it's the view that there are no moral facts. Everything is, like, just your opinion, maaaaaaaan!

Cilantro, Moral Truth, and Justification
Appetizers: Anti-realism About Gustatory Facts 
At dinner tables around the world, there is perhaps no issue more divisive than whether cilantro is or is not a delicious garnish. It is the gustatory domain's own abortion debate. There's little middle ground, and each side views the other as irredeemably wrong. In more reflective moments, though, most would agree there are no objective facts about the deliciousness of particular foods. Abe can claim that cilantro is objectively delicious while Bob claims that cilantro is objectively disgusting, but the fact of the matter is that there is no fact of the matter! Granting this assumption, is there any way we could make sense of the idea that either Abe's or Bob's belief is better justified than the other?

For the moment, I'm going to assume that there isn't. It seems as though Abe and Bob could each offer justifications for why cilantro is subjectively delicious or disgusting, but I doubt any of these reasons would convince a third party of cilantro's objective gustatory properties. Abe and Bob could insist that their arguments and reasons support objective gustatory facts, but we'd dismiss their claims as category mistakes—they're confusing their personal preferences for objective facts about the world. Any argument they give for objective gustatory facts is better interpreted as their subjective gustatory preferences being projected onto the world.

Now consider an analogous moral case and substitute your favorite moral propositions and their opposites for the gustatory ones. For example, Abe claims that it is an objective fact that slavery is a morally good institution while Bob claims the opposite—i.e., that it is an objective fact that slavery is a morally bad institution. If, in the cilantro case, anti-realism about objective gustatory facts leads us to accept that neither competing belief is better justified than the other, then consistency seems to require that anti-realism about moral facts lead us to conclude that neither Abe's nor Bob's belief regarding slavery is more justified than the other. Just as there are no objective facts about the deliciousness of cilantro, there are no objective facts about the moral badness or goodness of slavery, and so one position cannot be more justified than the other. Any argument is merely a projection of the interlocutor's personal preferences or explainable by appeal to facts about their psychology.

There may be some extreme anti-realists out there who are willing to bite the bullet and concede the point. However, I'm willing to bet that many anti-realists would deny that all moral beliefs are equally well-justified even if moral beliefs can't be objectively true or false. If I'm right, then these anti-realists need an account of justification that doesn't depend on the notion of truth. Is this possible?

The framework for this paper is to examine the relationship between moral anti-realism and justification. Suppose we accept that MORAL ANTI-REALISM IS TRUE: there are no objective moral facts. On what basis can we then evaluate competing moral claims? Is justifying objective moral claims analogous to trying to justify objective gustatory claims? That is, since there really is no fact of the matter, is one claim just as well-justified (or unjustified) as any other? The puzzle for the anti-realist is to reconcile MORAL ANTI-REALISM IS TRUE with NOT ALL MORAL CLAIMS ARE EQUALLY JUSTIFIED. I will argue that, if we adopt an externalist theory of justification, a Peircian fallibilism offers a potential solution to the puzzle. Before proposing my solution, I will consider and evaluate other attempts to reconcile the two assertions.

A Quick Word on Theories of Justification
Before proceeding we’re going to need to take a brief look at theories of justification and pare down the scope of my inquiry.

Two Theories of Justification
One way to analyze the concept of justification is along internalist/externalist lines. Internalists argue that a belief is justified so long as the believer is able to provide some sort of argument or supporting evidence when challenged. Externalists argue that a belief is justified if it was generated by a reliable belief-forming process, where "reliable" means that the process generates more true than false beliefs in the long run. For example, beliefs formed by visual perception are justified on the externalist view because visual perception generates more true beliefs than false beliefs in the long run. So, my belief that there is a computer in front of me is justified because it was formed by my seeing it—which is a reliable process. It's more likely that I'm actually seeing a computer than that I'm hallucinating one.
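As a rough illustration of the externalist's criterion (a toy sketch of my own, not anything from the literature), reliability can be thought of as a simple long-run ratio: a process counts as reliable when its true outputs outnumber its false ones. The track records below are made up for illustration.

```python
# Toy sketch only: model a belief-forming process by the truth values of
# the beliefs it has produced, and call it reliable when true outputs
# outnumber false ones in the long run.

def is_reliable(belief_truth_values: list[bool]) -> bool:
    """Return True if the process produced more true beliefs than false ones."""
    return sum(belief_truth_values) > len(belief_truth_values) / 2

# Hypothetical track records:
perception = [True, True, True, False, True]       # mostly true outputs
wishful_thinking = [False, False, True, False]     # mostly false outputs

print(is_reliable(perception))        # True  -> its outputs count as justified
print(is_reliable(wishful_thinking))  # False -> its outputs do not
```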

I wish to side-step taking a definitive position on the internalist/externalist debate and suggest that both types of justification are plausible in ethics. We think that a moral belief is justified via normative reasons (internalist), but we also think particular ways of arriving at moral beliefs confer justification. For example, we think that moral judgments arrived at through careful reasoning and/or reflection are more justified than those produced by unreflective emotional knee-jerk reactions. And so it's plausible to think the reliability of the process that produces a moral belief is at least partially relevant to the belief's (relative) justification. For the remainder of this paper, I will grant myself that assumption and constrain the scope of justification to externalist justification. A full treatment of an internalist model with respect to my inquiry requires a paper unto itself—although I suspect internalist theories may face similar problems.

Round 1: Externalist Justification Isn't Possible if There Are No Moral Facts
The simple argument against the possibility of an anti-realist account of externalist justification goes something like this. 

P1. Reliability is cashed out in terms of whether a process produces more true than false beliefs in the long run.
P2. Anti-realists deny that moral propositions can be true or false.
C1. So, there's no way to evaluate the reliability of a process when it comes to moral beliefs because the very attributes that we require to measure reliability aren't available.
C2. So, on the anti-realist model all moral beliefs are equally justified (or unjustified).

In short, anti-realists deny the very attribute (truth) required to measure reliability. If we can't know which processes are more reliable than others, there is no externalist ground to say one moral belief is better justified than another. But, again, surely some moral beliefs are better justified than others…but how?

Reply
Consider the inference rule modus ponens. We know modus ponens to be a reliable belief-forming process from using it in other domains. It is content neutral. Its reliability credentials have already been checked out, so all we need to do is import it (and similar content-neutral processes) into the domain of ethics. Anti-realists can say that moral conclusions arrived at through modus ponens (or other combinations of formal rules of inference) are more justified than those that aren't.

Counter-Reply
Modus ponens and other valid argument structures are contingently reliable processes. That is, if the inputs (i.e., premises) are true, then so too are the outputs. The problem is that the anti-realist has denied the possibility of true inputs in the moral case. If the inputs can be neither true nor false, then the conclusions are also neither true nor false. And worse yet, the same argument structure can yield apparently contradictory outputs.

Consider the following examples:
Ethics 1
1E. If you abort a fetus, it is wrong. (Neither true nor false.)
2E. I aborted a fetus. (Neither true nor false.)
3E. Therefore, what I did was wrong. (Neither true nor false.)

Ethics 2
1E*. If you abort a fetus, it isn't wrong. (Neither true nor false.)
2E*. I aborted a fetus. (Neither true nor false.)
3E*. Therefore, what I did wasn't wrong. (Neither true nor false.)
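To make the counter-reply concrete, here is a toy three-valued sketch (my own illustration, not part of the original argument): treat the anti-realist's moral premises as lacking a truth value, and a valid rule like modus ponens then has nothing truth-apt to pass along.

```python
# Toy sketch: modus ponens over three truth statuses (True, False, None),
# where None stands for "neither true nor false".
from typing import Optional

def modus_ponens(p: Optional[bool], p_implies_q: Optional[bool]) -> Optional[bool]:
    """Return the status of q, given the statuses of p and (p -> q)."""
    if p is True and p_implies_q is True:
        return True    # classical case: q follows
    if p is True and p_implies_q is False:
        return False   # p holds but the conditional fails, so q is false
    return None        # no truth-valued verdict on q is forced

# Ethics 1 and Ethics 2 above: both premises lack truth values,
# so the rule delivers no truth-valued conclusion either way.
print(modus_ponens(None, None))  # None
```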

It appears as though importing valid argument structures into ethics doesn’t give a solution to the puzzle of reconciling MORAL ANTI-REALISM IS TRUE with NOT ALL MORAL CLAIMS ARE EQUALLY JUSTIFIED.

Round 2: Content-Generating Processes
Perhaps the problem here is the content neutrality of the above processes. We need processes that justify initial moral premises as well as yield conclusions. We have some familiar, plausible candidate processes that might confer justification: reflective equilibrium, rational reflection, rational discourse, coherence with existing beliefs, idealizing what a purely rational agent would want, applying the principle of impartiality, and universalization, to name a few.

Notice also that we think some cognitive processes for ethical judgment don't confer justification—i.e., are unreliable. For example, if I form a moral judgment when I'm extremely angry, I might come to reject that belief once I calm down and employ one of the above methods instead. So, a belief arrived at as a consequence of a temporary, acute emotional reaction is not well-justified.

Moral psychology and social psychology are littered with experiments where either the subject or the environment is manipulated to produce judgments that the subject would reject upon learning of the manipulation. This seems to hint at an answer: beliefs formed by processes that involve manipulation of the environment and/or the subject's mood are less reliable/less justified than beliefs formed by processes that don't involve any obvious manipulation.

The Same Problem?
This account implies that some processes are more likely to “get it right” than others. But what is it to “get something right” if there's no target for epistemic rightness (i.e., truth)? This seems to be the same problem from Section 1 all over again. If moral beliefs can be neither true nor false, in what way can we say that one process yields more true outputs than another? Sure, we might systematically reject judgments produced by some processes in favor of others, but why prefer the outputs of one process over another if we can't say that the judgments from one are more likely to be true than those from another? The grounds for thinking judgments from one process are more justified than those from another seem to be precisely that its judgments are more likely to be true.

Round 3: Analogy with Scientific Justification and Fallibilism as a Possible Solution
Harman argues that there is an important disanalogy between explaining ethical judgments and explaining scientific judgments. We can't explain why a physicist believes there is a proton without also postulating that there is something “out there” in the world causing the physicist's observation, which is in turn interpreted as a proton. A moral judgment, on the other hand, can be explained merely by appeal to a subject's psychology. We needn't postulate any object or moral property “out there” that causes a subject to have a moral belief that x.

Let's accept Harman's argument. Despite the fact that the causes of scientific and moral judgments might differ, there may be ways in which justification functions similarly in both domains.
Scientists habitually couch their arguments and conclusions in fallibilist language. Claims and conclusions are presented as provisional, based on the currently available evidence and methods. The history of scientific discovery is one of revised and overturned conclusions in light of new evidence and of recursive, self-correcting improvements in the scientific method itself.

From the point of view of internalism vs. externalism about justification, we might consider new data a kind of internalist justification for a claim because they are reasons to believe one thing rather than another. Research methods, on the other hand, can be viewed as instances of the externalist's justification-conferring processes. The idea that some processes are more reliable than others is a familiar one in scientific research: claims that derive from methods (i.e., processes) that avoid known biases are more reliable, and hence more justified, than claims that derive from methods that don't.

For example, the placebo effect is a well-known occurrence in medical science. If patients think they are receiving treatment—even if they aren’t—patients report subjective measures (e.g., reduced pain/discomfort) significantly more positively than non-treatment (i.e., control) groups. We also know that if the researcher knows which patients are in the treatment group and which aren’t, this can influence both the way the researcher asks the patient questions and how they interpret data (they’ll bias toward a positive interpretation). For these reasons we think that the results from medical research that are placebo-controlled and double-blinded are more reliable than those that aren’t. 

In short, data from a study that employs a more reliable process (e.g., double-blinding and placebo control) are more justified than data from a study that does neither of these things. The more a process avoids known errors, the more justified its conclusions—despite the fact that the blinded study's conclusions might also eventually be overturned. There is always the background understanding that new and better methods might come about and generate incommensurate data and/or conclusions, but this doesn't undermine the relative justification that current methods confer on their output beliefs.

Analogously, moral and social psychology have produced a vast literature showing all the ways in which our moral thinking can go awry. We know that a cluttered desk can cause us to pass harsher penalties than we would otherwise, that a temporarily heightened emotional state greatly influences our judgments, that the feeling of empathy can lead us astray, and that implicit biases can play important roles in our judgments—to name a few. In short, there are many ways in which both our basic hard-wiring and various forms of personal and environmental manipulation can cause us to make judgments that we, upon learning of the manipulation or the bias, would likely reject in favor of a judgment arising from unmanipulated deliberation or from one of the familiar gold-standard methods of moral reasoning.

Perhaps an anti-realist can think of the activity of moral thinking not as one that aims at discovering some objective truth, but rather as one that seeks to avoid known cognitive errors and insulate against manipulation. Insofar as our judgments derive from methods that avoid (known) cognitive errors and biases, our moral claims are better justified. This, however, doesn't entirely solve the puzzle. We still need to answer why we should choose the output of one type of process over another. It's easy to say that the manipulated judgment is “wrong” or “mistaken,” but how do we say this without appeal to truth? “Error” implies a “right” answer. One might just as easily say that the correct judgment is the manipulated or biased one and that we are systematically mistaken to adopt the reflective judgment made in a cool dark room.

The Main Challenge to The Anti-Realist: The moral anti-realist needs some general criteria to explain why we (ought to) systematically endorse judgments from one process rather than another. That we do isn’t enough. We must give an account of why one confers more justification than another.

Peirce, Methods of Inquiry, and The “Fixing of Opinion”
Before suggesting a possible criterion to answer the challenge, I want to sketch out a Peircian analysis of methods of inquiry, since it inspired my suggestion. I'll fill in other details later as they become relevant. Peirce argues that the “fixing of opinion” or “the settlement of opinion” is the sole object of inquiry (Peirce, pp. 2-3). While we needn't commit to Peirce's exclusivity claim regarding the purpose of inquiry, he provides useful insight into why we might think some processes confer greater justification than others. I take him to be proposing two related desiderata for our methods of inquiry: that they produce beliefs that are (a) relatively stable over time and (b) relatively impervious to challenge. The anti-realist can distinguish between processes' justificatory status according to the degree to which they achieve (a) and (b).

This possible anti-realist response to the challenge of justification is not without difficulties. I will explore two related ones. First, it isn't clear that stability of beliefs is a condition for justification. As standard examples of recalcitrant and dogmatic Nazis or racists show, we have reason to believe stability might have little to do with justification. Second, I will need to show that stability marks something we think matters to justification: namely, the absence of cognitive errors and the inclusion of all relevant evidence. This is the fallibilist aspect of the proposal: stability needn't be a proxy for truth-tracking; it is, however, a reasonable proxy for believing that we are avoiding errors. This is part of the answer, but not all of it.

As Peirce notes, stability can be achieved in various ways—not all of which we'd think confer justification. The second part of this fallibilist model of justification has to do with the degree to which a process excludes relevant evidence, thereby making its outputs susceptible to challenge. Thus, for Peirce, long-run stability requires that the method of inquiry take into account all relevant sources of evidence. A method that excludes forms of evidence will produce beliefs that are more likely to eventually be overturned, or it will require that people be unable to make inferences from one case to another or from one domain to another. Peirce compares four methods of inquiry, each of which aims to produce stable beliefs. In so doing he illustrates how this second criterion (i.e., imperviousness to challenge) works with the first (i.e., stability) to produce a theory of justification.

To achieve stability of belief in the method of tenacity, one dogmatically adheres to one's chosen doctrines by rejecting all contrary evidence and actively avoiding contradictory opinions. Coming into contact with others, however, necessarily exposes one to different and conflicting views. This method can't accommodate ever-present new data without giving up on rationality and the ability to make inferences. Unless we live like hermits, the “social impulse” against this practice makes it unlikely to succeed. The method of authority is the same strategy practiced at the level of the state, with the addition of social control. This method fails to achieve long-term stability because the state cannot control everyone's thoughts on every topic. This isn't a problem if people are unable to make inferences from one domain of inquiry into another, but to the extent that they are able, stability will be undone and doubt will emerge.

The above two methods share an important feature in how they achieve stability. In both cases, when I need to decide between two competing beliefs, the method tells me I ought to pick the one that coheres with what I already believe. In other words, there will be cases where I will have to “quit reason” by rejecting contrary evidence and inferences. Both methods, that is, systematically exclude relevant evidence in generating beliefs.

Now, compare these methods with something like wide reflective equilibrium or rational discourse. With these methods, how do we determine what to believe? Rather than exclusively referring back to what we or the state/religion already endorse, when these methods confront new evidence they adjust their output beliefs accordingly. In the long run, the supposition is that (for example) reflective equilibrium and rational discourse will lead to more stable beliefs than the above two methods because their outputs incorporate the best available total evidence rather than rejecting it.

Reply to the First Challenge
Let's restate the first challenge: stability on its own doesn't seem to confer justification. There are many methods of inquiry by which we might achieve stable beliefs, not all of which confer justification. When we defend stability as a justificatory property, the concern is that we're begging the question: when a process generates stable beliefs that we approve of, we think stability is good; when it generates beliefs we disagree with, we think stability is bad.

To reply to the challenge, let's consider, for example, reflective equilibrium. With (wide) reflective equilibrium we suppose that by finding an equilibrium between everyone's principles and considered judgments, we arrive in the long run at a view that no one could reasonably reject (because it also takes their views into account). Stability, on this method, arises as a consequence of taking everyone's principles and considered judgments into account; i.e., the method doesn't obviously exclude any evidence that might, in the long run, diminish the stability of the beliefs it produces. And so the critic of stability has a point: stability on its own might not confer justification. What matters is why the view is stable. The assumption is that moral views derived from processes that take competing viewpoints into account are stable for the right reasons: they are less susceptible to challenge. And they are less susceptible to challenge because they don't exclude evidence (widely construed).

Reply to the Second Challenge
The second challenge is to give positive reasons for thinking long-term stability confers justification. When we make knee-jerk or manipulated judgments, we end up with beliefs inconsistent with our other beliefs, and so we have reason to conclude we've made an error somewhere—in endorsing an inference, a judgment, or a general principle. Conversely, with a process like reflective equilibrium or rational discourse we eliminate inconsistencies in the long run. By proxy, we're also eliminating errors in our inferences, judgments, or general principles; i.e., the things that undermine justification. In the long run, the supposition is that some methods of inquiry (e.g., reflective equilibrium) will lead to fewer and fewer errors, in turn contributing to more and more stable beliefs.
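As a heavily idealized toy model (my own construction, not meant as a serious account of reflective equilibrium), one might picture the long-run elimination of inconsistencies as repeatedly revising away whichever judgment is implicated in the most known conflicts. The named judgments and conflicts below are made up for illustration.

```python
# Toy sketch: iteratively drop the most conflict-prone belief until no
# known inconsistencies remain.  The "conflicts" stand in for whatever
# incompatibilities deliberation surfaces among judgments, principles,
# and inferences.

def resolve(beliefs: set[str], conflicts: set[frozenset[str]]) -> set[str]:
    """Remove beliefs, worst offender first, until the set is conflict-free."""
    beliefs = set(beliefs)
    while True:
        live = [c for c in conflicts if c <= beliefs]
        if not live:
            return beliefs
        # drop the belief implicated in the most remaining conflicts
        worst = max(beliefs, key=lambda b: sum(b in c for c in live))
        beliefs.remove(worst)

judgments = {"knee-jerk verdict", "general principle", "considered judgment"}
clashes = {frozenset({"knee-jerk verdict", "general principle"}),
           frozenset({"knee-jerk verdict", "considered judgment"})}
print(resolve(judgments, clashes))  # the knee-jerk verdict is the one revised away
```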

This also helps answer a problem I raised in the first section of the paper. A reliabilist account of justification has truth built in and so doesn't help an anti-realist explain how following a valid argument structure can confer justification: if the inputs don't have a truth value, then neither will the outputs, and so one output is just as justified as another. A fallibilist approach provides a solution. Failure to follow a valid argument scheme undermines justification because it indicates an error in reasoning. And so deliberative belief-forming processes that don't follow valid schemes are making errors, and in that respect their outcomes are less justified than those of processes that do.

Elimination of errors is part of the answer to why we think beliefs derived from one process are better justified than beliefs derived from another. Long-term stability of output beliefs is partly a proxy for the absence of errors, and so, to the degree that a process generates stable beliefs, we have reason to think those beliefs are less likely to contain or be the product of errors.

The second part of the answer is similar to the reply to the first challenge: processes that exclude classes of evidence or deny certain inferences aren't going to be stable. There's an analogy with science: a research method that regularly has its conclusions overturned because it fails to take into account certain sources of evidence is a process that generates beliefs that are less stable in the long run. The beliefs are less stable because important classes of evidence aren't taken into account or controlled for (for example, various cognitive biases). Similarly, a moral reasoning process that fails to take into account certain sources of evidence (e.g., competing arguments, the fact that we are prone to certain cognitive errors) is also going to generate beliefs that are less stable, and by extension, less justified.

Conclusion
The puzzle for the anti-realist is to reconcile a commitment to there being no moral facts with the view that some moral beliefs are more justified than others. If we take an externalist account of justification, a Peircian fallibilism offers a possible solution to the puzzle. Why should we think the outputs of one process are more justified than those of another if the outputs can't be true? Because some processes generate beliefs that are relatively more stable and impervious to challenge in the long run than do other processes. Stability, on this model, occurs as a consequence of taking into account all the relevant data and avoiding cognitive errors in generating output beliefs. By doing so, outputs are less likely to be overturned by excluded evidence. Stability is also a proxy for the absence of error: if a process produces beliefs that are systematically overridden, it must be because its outputs are inconsistent with other judgments, beliefs, or inferences. Processes that systematically generate inconsistencies indicate errors, which in turn also undermine stability.

I'd like to close with the following thought experiment. Suppose both realists and anti-realists agree on which processes confer greater relative justification than others. Would the realism/anti-realism debate matter much? Aren't comparatively well-justified beliefs (and actions) what we're really after in ethics?






