Ignorance and Decision-Making, Part Four
Empirical Evidence for the Logical Priority of the Epistemic
This is the fourth part of a multi-part series on ignorance and decision-making. In previous parts of the series, I described the logical priority of the epistemic – the thesis that relevant ignorance serves as a criterion for culling and sorting courses of action into a consciously tractable preference ranking – discussed the philosophical arguments that support the thesis, and used the difficulties of surrogate decision-making to illustrate it.
I have been promising for a few weeks to write a newsletter post that would describe the experimental studies that I have run, together with my main collaborator on the project, Professor Parker Crutchfield of Western Michigan University, to empirically test the thesis. Here is that long-promised post.
This experimental part of the project has produced one publication so far (here’s a non-firewalled preprint version of the same paper). I will focus below on discussing the experiments described in this publication. Anyone interested in the details of these experiments or the data collected is referred to the published paper (or linked preprint).
A second paper describing another series of experiments is currently under review, but discussion of it will have to wait for another occasion. We’re not inclined to post a preprint or otherwise unblind ourselves as authors of this second paper at this time. (Suffice it to say that the experimental results described in this second paper support – or, more carefully, do not falsify – the thesis of the logical priority of the epistemic.) We have recently completed a third round of experiments intended to replicate the results of the first two series with larger samples. The results of these most recent experiments have yet to be statistically analyzed, so I won’t pronounce upon them here.
I plan to write another post next week, which will probably be the last part of the series for the time being, that will describe the concept of logical priority, compare it with other forms of priority, and explain why logical priority, rather than some other form of priority, is the form relevant to the role of epistemic burdens in decision-making.
According to the thesis of the logical priority of the epistemic, the nature and extent of the decision-maker’s relevant ignorance – the epistemic burdens associated with whatever courses of action might be pursued in some decision context – serve to determine both the courses of action that qualify as choosable options under the circumstances and where remaining options are ranked in the preference rankings from which the decision-maker ultimately chooses. Epistemic burdens pre-consciously shape our preference rankings. Other things equal, courses of action about which the decision-maker is more knowledgeable, those courses of action that carry more surmountable epistemic burdens, are more likely to both qualify as choosable options and rank higher in the decision-maker’s initial conscious preference ranking relative to other options that carry heavier, less surmountable, epistemic burdens.
The logical priority of the epistemic is intuitively plausible. Indeed, as an anonymous reviewer said in their favorable report on the paper linked above, “It is a bit hard for me to imagine any other possibility – how can someone reach a moral judgment about a situation they know nothing about?” The thesis is, to most minds, manifestly apparent. For all its obviousness, however, prior to our work, the thesis had never been explicated in the various disciplines that investigate decision-making, i.e., philosophy, psychology, economics, and decision theory. It may be obvious (to most minds), but it is nevertheless original. What’s more, to properly substantiate the thesis, it cannot suffice simply to invoke its seeming obviousness or the philosophical arguments we have developed in its defense. It is necessary to test the thesis empirically.
One testable implication of the thesis is that normative judgments – judgments about what some person should or ought to do – depend (inter alia) on the judge’s knowledge and ignorance regarding relevant circumstances, and should tend to change as the judge’s knowledge and ignorance about relevant circumstances change. Other things equal, the normative judgments that a person makes when ignorant, in some way and to some extent, of relevant circumstances should tend to diverge from the judgments they make when fully knowledgeable. One way to test the logical priority of the epistemic, therefore, is to set other things equal and see whether persons’ normative judgments do in fact change as their relevant knowledge changes. This is what we did in our first series of experiments.
I assume that most readers are familiar with the famous Trolley Problem thought experiment that philosophers and psychologists use to elicit persons’ moral predispositions. There are many variations on the Trolley Problem, but the original version, due to philosopher Philippa Foot, goes something like this:
You are standing at the point where one train track splits into two tracks. A runaway train is approaching. On the right track are five rail workers. The brakes have stopped working, so the train can’t stop. Next to you is a button that switches tracks.
If you do nothing, the train will hit the five rail workers, who will die.
However, if you press the button, the train will switch to the left track. On the left track is one rail worker. The train will hit the rail worker, who will die, but the train will stop and the five rail workers on the right track will live.
What do you do?
We used this scenario to gauge our experimental subjects’ baseline predispositions toward either consequentialist or deontological moral judgments.
According to consequentialist moral theories, the moral quality of an action is a function of its consequences; the reasons that motivate an action are irrelevant to its moral value. Morally good (bad) actions are those that produce good (bad) consequences, regardless of the quality of the intentions that motivated the action.
On the other hand, deontological theories make the moral value of an action dependent entirely on the reasons that moved the actor to the action; the results of the action are irrelevant to its moral quality. Morally good (bad) actions are those performed for good (bad) reasons, regardless of the quality of the consequences generated by the action.
It is broadly accepted among philosophers and psychologists that a choice to press the button to switch the train to the left track, thereby killing the one person on this track, while preserving the lives of the five persons on the right track, is indicative of a consequentialist inclination. Such a choice indicates a willingness to commit active harm, i.e., to murder the person on the left track, in order to ensure a balance of good over bad consequences.
Conversely, a choice to do nothing, to let the train stay on the right track, leading to the deaths of the five persons on this track and the preservation of the single person on the left track, indicates a penchant for deontological ethics. Such a choice indicates unwillingness to commit active harm, i.e., refusal to be a party to murder, despite the, on balance, bad consequences of the choice.
If there is anything to the thesis of the logical priority of the epistemic, then whether a person otherwise predisposed to consequentialism actually makes a consequentialist judgment in the moment, in some circumstances, depends on their relevant knowledge and ignorance of these circumstances. Someone predisposed to consequentialism can easily make a consequentialist judgment when they possess all of the relevant knowledge. It is more difficult, however, for a baseline consequentialist to offer a momentary consequentialist judgment when they lack some of this knowledge, especially when they are ignorant of some of the consequences of one or more of the options from which they must choose.
In order to deliberately make a consequentialist judgment between two options, the judge must be able to compare the consequences, which requires that they know the consequences of the competing options. If the judge is ignorant of some of the relevant consequences of the two options, they will struggle to deliberately make the consequentialist choice that they would easily make under more favorable epistemic circumstances. Indeed, when relevantly ignorant, a baseline consequentialist can make a consequentialist choice only luckily, accidentally, or otherwise spontaneously, as it were.
Thus, according to the logical priority of the epistemic, we should observe a disconnect between a consequentialist’s predisposition toward consequentialism, as indicated by their judgments when fully relevantly knowledgeable, and their momentary judgments when relevantly ignorant.
On the other hand, a person disposed to deontological moral judgments when fully relevantly knowledgeable should similarly struggle to offer a deontological judgment in the moment when ignorant of relevant circumstances. Unlike the consequentialist, however, who needs to know the consequences of the options from which they must choose, the deontologist would seem to require knowledge of prevailing moral principles, such as the principle that advises against killing or otherwise harming other persons, in order to ensure that their judgments are properly motivated by these principles.
In order to deliberately make a deontological judgment between two options, the judge must be able to evaluate the quality of the reasons that might motivate choosing one option rather than the other, which requires that they know any relevant moral rules that recommend for or against the competing options. If the judge is ignorant of prevailing moral principles, they will struggle to deliberately make the deontological choice that they would easily make under more favorable epistemic circumstances. Indeed, when relevantly ignorant, a baseline deontologist can make a deontological choice only luckily, accidentally, or otherwise spontaneously, as it were.
Thus, according to the logical priority of the epistemic, we should observe a disconnect between a deontologist’s predisposition toward deontological ethics, as indicated by their judgments when fully relevantly knowledgeable, and the momentary judgments that they offer when relevantly ignorant.
In short, when persons, whether inclined to consequentialism or deontology, are relevantly ignorant, all moral bets are off. If these persons cannot deliberately judge in accordance with their underlying moral preferences, their judgments may spontaneously accord with their predispositions, or they might not.
The logical priority of the epistemic implies nothing specific about how persons’ momentary judgments will be affected by relevant ignorance, just that they will be affected. That is, the thesis doesn’t suggest, e.g., that, when relevantly ignorant, all baseline consequentialists will become momentary deontologists or vice versa, but it does imply that the proportion of consequentialists and deontologists in any given sample of experimental subjects will vary significantly with their relevant knowledge and ignorance.
This is indeed what we found in our experiments.
After presenting subjects with the standard Trolley Problem scenario indicated above, we presented subjects with seven further scenarios in which their knowledge of some of the consequences of either letting the train proceed on the main track or switching the train to the spur track was manipulated.
In some of these scenarios, there was no way for subjects to remedy their relevant ignorance. For example, in the scenario we called “Complete Ignorance,” subjects were informed that there may or may not be persons on either or both tracks, but were not told whether, and if so how many, persons were on each track. In “Partial Ignorance Both Tracks,” experimental subjects were informed that there were persons on both tracks, but were not told how many persons were on each track.
In other scenarios, subjects were given the potential to overcome their relevant ignorance by choosing to attempt to solve a puzzle that, if solved, would provide them with the missing knowledge. For example, in “Surmountable Ignorance Switch Track,” subjects were informed of the number of persons on the main track, but not whether, and if so how many, persons were on the spur track. However, they were given the option to learn the number of persons on the spur track by solving a puzzle (taken from a sample SAT test). In “Know-How Ignorance,” subjects were presented with the standard Trolley Problem scenario, but were informed that the button controlling the rail switch was malfunctioning and that they would have to solve a logical puzzle (taken from a sample LSAT exam) if they wanted to switch tracks.
This is not the appropriate forum in which to describe all of our experimental scenarios or to lay out the data and statistical analysis in detail. Anyone interested in these particulars can find them in both the published and preprint versions of the paper linked above. Suffice it to say for present purposes that nothing in our results undermines the thesis of the logical priority of the epistemic.
These results indicate that ignorance of relevant consequences affects both those predisposed to consequentialism and those inclined to deontology. Knowledge of relevant consequences appears to enable one’s moral predisposition, but relevant ignorance seems to disable the expression of this predisposition in one’s momentary moral judgments.[1] All moral bets are off when subjects are relevantly ignorant.[2]
[1] In another experiment, described in the second paper mentioned above (currently under review), we blinded subjects to knowledge of prevailing moral principles, rules, and duties, to see how their momentary judgments were affected by ignorance of considerations relevant to deontological choice. If that paper is accepted (or we decide to publicly reveal our authorship), I will try to remember to write another Substack newsletter about the experiments discussed in it.
[2] We conducted another experiment, also described in the published paper, that tested whether relevant ignorance or the need to use personal force (e.g., the need to physically push a person into the path of the oncoming trolley to halt its progress) was more fundamental to normative decision-making. The need to use personal force has been shown to be a statistically significant causal factor in a modified Trolley Problem scenario. Greene et al. (2009) argued that, perhaps by raising the salience of emotional considerations, the need to use personal force is prior to some other factors that also affect moral judgments. (See Greene, J. D., Cushman, F. A., Stewart, L. E., Lowenberg, K., Nystrom, L. E., & Cohen, J. D. [2009]. Pushing moral buttons: The interaction between personal force and intention in moral judgment. Cognition, 111 [3], 364–371.)
Our results indicate that ignorance of relevant consequences is at least as fundamental as personal force in moral judgment. If it were possible to blind subjects to the potential need to use personal force in some scenario, we might find evidence that would conclusively decide the question of the fundamentality to moral decision-making of ignorance versus the need to use personal force. Unfortunately, we have yet to devise an experimental scenario that would make subjects ignorant of the potential need to use personal force.