Research

Publications

Moral Responsibility is Not Proportionate to Causal Responsibility (2022, Southern Journal of Philosophy)

Abstract:

It seems intuitive to think that if you contribute more to an outcome, you should be more morally responsible for it. Some philosophers think this is correct. They accept the thesis that, ceteris paribus, one's degree of moral responsibility for an outcome is proportionate to one's degree of causal contribution to that outcome. Yet what the degree of causal contribution amounts to remains unclear in the literature. Hence, the underlying idea of this thesis remains equally unclear. In this article, I will consider various plausible criteria for measuring degrees of causal contribution. On each of these criteria, I will show that this thesis entails implausible results. I will also show that there are other plausible theoretical options that can account for the kinds of cases that motivate this thesis. I will conclude that we should reject this thesis.

 

Click here for the PhilPapers page.

Email me for a copy.

Against Resultant Moral Luck (2022, Ratio)

Abstract:

Does one’s causal responsibility increase the degree of one’s moral responsibility? The proponents of resultant moral luck hold that it does. Until quite recently, the causation literature has almost exclusively been interested in the binary question of whether one factor is a cause of an outcome. Naturally, the debate over resultant moral luck also revolved around this binary question. However, we’ve seen an increased interest in the question of degrees of causation in recent years. And some philosophers have already explored various implications of a graded notion of causation for resultant moral luck. In this paper, I’ll do the same. But the implications that I’ll draw attention to are bad news for resultant moral luck. I’ll show that resultant moral luck entails some implausible results that leave resultant moral luck more indefensible than it was previously thought to be. I’ll also show that what’s typically taken to be the positive argument in favor of resultant moral luck fails. I’ll conclude that we should reject resultant moral luck.

Click here for the PhilPapers page.

Email me for a copy.

Causation Comes in Degrees (2022, Synthese)

Abstract:

Which country, politician, or policy is more of a cause of the Covid-19 pandemic death toll? Which of two factories causally contributed more to the pollution of a nearby river? A wide-ranging portion of our everyday thought, talk, and attitudes relies on a graded notion of causation. However, it is sometimes highlighted that on most contemporary accounts, causation is on-off. Some philosophers further question the legitimacy of talk of degrees of causation and suggest that we avoid it. Some hold that the notion of degrees of causation is an illusion. In this paper, I’ll argue that causation does come in degrees.

Click here for the PhilPapers page and to download the paper.

Epistemic Injustice (2020)

Click here for my entry on epistemic injustice in 1000WordPhilosophy: An Introductory Anthology.


Dissertation Summary

In my dissertation, I argue for responsibility internalism. That is, moral responsibility (i.e., accountability, or being blameworthy or praiseworthy) depends only on factors internal to agents. Employing this view, I also argue that no one is ever blameworthy for what AI does, but that this isn’t morally problematic in a way that counts against developing or using AI.

Here’s a brief overview of my arguments. Responsibility is grounded in three potential conditions: the control (or freedom) condition, the epistemic (or awareness) condition, and the causal responsibility (or consequences) condition. I argue that causal responsibility is irrelevant to moral responsibility, and that the control condition and the epistemic condition depend only on factors internal to agents. Moreover, since what AI does is at best a consequence of our actions, and the consequences of our actions are irrelevant to our responsibility, no one is responsible or blameworthy for what AI does. That is, the so-called responsibility gap exists. However, I argue, this isn’t morally worrisome in a way that counts against developing or using AI. Below, I present summaries of each chapter of my dissertation.

Some philosophers hold that, all else equal, one’s degree of moral responsibility is proportionate to one’s degree of causation (or causal contribution). Call this thesis Proportionality. If causation doesn’t come in degrees, Proportionality is false. So, in chapter one, I discuss whether causation comes in degrees. I argue that it does by showing that all the main objections against graded causation fail and that denying graded causation is theoretically too costly. This chapter of my dissertation has been published in Synthese.

In chapter two, I argue that Proportionality is false despite the fact that causation comes in degrees. To establish this, I employ six plausible criteria for measuring degrees of causation and show that Proportionality, understood according to each of these criteria, entails implausible results. I also show that there are other plausible theoretical options to account for the kinds of cases that motivate Proportionality. This chapter of my dissertation has been published in the Southern Journal of Philosophy.

In chapter three, I argue that there is no resultant moral luck (RML). What’s at stake in the debate over RML is best cast in terms of whether causal responsibility increases one’s moral responsibility. I draw attention to previously unexplored implications of RML and argue that these implications leave RML more indefensible than it was thought to be. I also show that what’s typically taken to be the positive argument in favor of RML fails. I conclude that we should reject resultant moral luck. This chapter of my dissertation has been published in Ratio.

Proportionality and RML are the two most plausible positions one could take if causal responsibility is relevant for moral responsibility. Hence, in chapter four, I conclude that causal responsibility is metaphysically irrelevant for moral responsibility, clarify and develop this thesis, and defend it against potential objections.

In chapter five, I argue that neither the epistemic condition nor the control condition presupposes anything external to agents. The epistemic condition rests on the idea, roughly, that one can be morally responsible only if one is aware of certain morally relevant factors. The awareness in question can be knowledge, justified (true) belief, or (true) belief. As it is commonly accepted, knowledge is too strong a requirement for moral responsibility. I follow the reasoning behind this and show that justified (true) belief is also too strong a requirement. I further argue that moral responsibility doesn’t require even true belief. And since the awareness requirement in question presupposes neither justification nor truth, it doesn’t presuppose anything external to agents.

The control condition is the subject matter of the classic free will debate. I survey the leading compatibilist and incompatibilist theories of control and argue that none of them, at least in their most plausible forms, presupposes anything external to agents. A major concern for my argument is that the debate between compatibilists and incompatibilists mainly revolves around determinism. Compatibilists argue that the kind of control required for moral responsibility—i.e., free will—is compatible with determinism, and incompatibilists reject this. Determinism is the idea that at any moment the state of the world and the laws of nature entail one unique future. So understood, determinism is not merely a feature internal to agents but a feature of the entire world. However, I argue, (in)determinism external to agents is irrelevant to the control condition—what matters is only (in)determinism internal to agents. That is, what matters is only whether the mental events in agents are (un)determined, not whether anything else in the universe is.

I conclude that the epistemic condition and the control condition depend only on factors internal to agents. Since I also argued that causal responsibility is irrelevant to moral responsibility, there remains no condition of moral responsibility that depends on anything external to agents. Hence, responsibility internalism is true.

In chapter six, I employ responsibility internalism to weigh in on a debate about responsibility in the context of artificial intelligence. Consider autonomous machines or systems that rely on artificial intelligence, such as self-driving cars, lethal autonomous weapons, candidate screening tools, medical systems that diagnose cancer, and automated content moderators. Who is responsible when such machines or systems (or AI for short) cause a harm? Given that current AI is far from being conscious or sentient, it is unclear that AI is responsible for a harm it causes. But given that AI gathers new information and acts autonomously, it is also unclear that those who develop or deploy AI are responsible for what AI does. This leads to the so-called responsibility gap: roughly, cases where AI causes a harm, but no one is responsible for it. Two central questions in the literature are whether the responsibility gap exists and, if so, whether it is morally problematic in a way that counts against developing or using AI. While some authors argue that the responsibility gap exists and is morally problematic, others argue that it doesn’t exist or that its existence is dubious. Drawing on discussions in the earlier chapters, I defend a novel position. I first argue that current AI doesn’t generate a novel concern about responsibility that older technologies don’t. Then, I argue that the responsibility gap exists—that, more precisely, it is inevitable and ubiquitous. I also argue that this is not morally worrisome in a way that counts against developing or using AI. This is because neither the responsibility gap, nor my argument for its existence, entails that no one can be justly held accountable, or that no one has a duty to make reparations, once AI causes a harm.


Works in Progress

A paper about AI responsibility gap (Under Review)

Abstract: Who is responsible for a harm caused by AI, that is, a machine or system that relies on artificial intelligence? Given that current AI is neither conscious nor sentient, it’s unclear that AI itself is responsible for the harm. But given that AI acts independently of its developer or user, it’s also unclear that the developer or user is responsible for it. This gives rise to the so-called responsibility gap: cases where AI causes a harm, but no one is responsible for it. Two central questions in the literature are whether the responsibility gap exists and, if so, whether it’s morally problematic in a way that counts against developing or using AI. While some authors argue that the responsibility gap exists and is morally problematic, others argue that it doesn’t exist. In this paper, I defend a novel position. First, I argue that current AI doesn’t generate a new kind of concern about responsibility that older technologies don’t. Then, I argue that the responsibility gap exists but is unproblematic.

A paper about the degree-scope response to moral luck (Under Review)

Abstract: Resultant moral luck is typically considered to be the most problematic type of moral luck. Arguably the most popular response to the problem of resultant moral luck is the idea that resultant luck affects the scope but not the degree of responsibility. Call this the ‘Degree Scope Response’ (DSR). Philosophers also use DSR in responding to other types of moral luck and in contexts outside moral luck. In this paper, I argue that DSR fails. I then suggest that we should hold that resultant luck affects neither the degree nor the scope of responsibility. Put differently, consequences are metaphysically irrelevant to responsibility. Further, I discuss various advantages of this view and show its implications for questions about free will, theories of causation, and responsibility in contexts outside moral luck. I also defend this view against the worry that it’s too revisionary.

A paper about the causal inefficacy problem in collective action cases (Under Review)

In this paper, I develop a new solution to the problem of inconsequentialism in collective action (or collective harm) cases. To illustrate, consider climate change. We all collectively contribute to its unwanted consequences. But individual actions seem inconsequential: one more or one less person taking a joyride in a gas-guzzler on a Sunday afternoon makes virtually no difference to these consequences. But then it’s unclear how there could be moral reasons, let alone duties, for individuals to act against climate change. This is a problem not only for consequentialist theories but also for Kantian and virtue-ethical theories, for it’s unclear why it should be unfair, or unvirtuous, to take the joyride if it makes no difference. In response, many authors argue that however insignificant individual contributions might be, they somehow still have moral significance.

I develop a solution that’s contrary to the pull toward this strategy in the literature. I appeal to various real-life cases and thought experiments to draw attention to an underexplored type of action: taking a stand. I show that taking a stand can be morally valuable, and hence morally reason-giving, even if it makes no difference to the outcome in question. Hence, I argue, one may have moral reasons for an action even if the action doesn’t make any difference. I also explore whether and how well ‘taking a stand’ fits in with various normative ethical theories.

A paper about rejecting resultant moral luck while accepting other sorts of moral luck (Under Review)

Abstract: The most popular position in the moral luck debate is to reject resultant moral luck while accepting the possibility of other types of moral luck. But it’s unclear whether this position is stable. Some argue that luck is luck: if it’s relevant for moral responsibility anywhere, it’s relevant everywhere, and vice versa. Some argue that given the similarities between circumstantial moral luck and resultant moral luck, there’s good evidence that if the former exists, so does the latter. The challenge is especially pressing for the large group of philosophers who deny only resultant moral luck. I argue that resultant moral luck doesn’t exist even if the other types of moral luck exist. This is because the other types of luck can, but the results of an action cannot, affect what makes one morally responsible.

A paper about the flicker defense against Frankfurt-style cases (Under Review)

Abstract: The Principle of Alternate Possibilities (PAP) says that one is responsible for an action only if one could have acted otherwise. The flicker defense is one promising line of response to Frankfurt-style cases (FSCs) in defense of PAP. The flicker defense is almost as old as FSCs. However, in recent years, it has made an intriguing comeback as some philosophers have developed stronger versions of this defense. But this ‘revived’ flicker defense has also recently been criticized. One aim of this paper is to respond to these criticisms. But part of my response requires revising the flicker defense. Hence, the other aim is to revise it and build an even stronger version of this defense.

A paper on moral rightness and wrongness versus moral praise and blame (Under review)

In this paper, I argue that one can be blameworthy for performing an action that’s right, and praiseworthy for an action that’s wrong. It’s relatively uncontroversial that basic desert responsibility (being apt for praise or blame) is distinct from responsibility in the duty sense (i.e., what’s morally right/wrong). But the extent to which they come apart can be controversial. For instance, it’s typically accepted that one may fail to be praiseworthy (/blameworthy) for an action that’s morally right (/wrong). Yet it’s also common to think that one can be praiseworthy (/blameworthy) for an action only if it’s morally right (/wrong). But this is false—or so I argue via a novel argument that I call the Argument from Moral Encouragement.

Responsibility Doesn't Require Alternative Possibilities (Draft)

Abstract: The Principle of Alternate Possibilities (PAP) says that one is responsible for an action only if one could have done otherwise. The most widely discussed challenge to PAP comes from Frankfurt-style cases (FSCs). The decades-long debate over PAP and FSCs has proved philosophically fruitful in many respects. But it’s also difficult not to get the impression from the literature that the debate has run its course or reached an impasse. In this paper, I present a novel argument that PAP is false.