Moral Responsibility Is Not Proportionate to Causal Responsibility (2022, The Southern Journal of Philosophy)
It seems intuitive to think that if you contribute more to an outcome, you should be more morally responsible for it. Some philosophers think this is correct. They accept the thesis that, ceteris paribus, one’s degree of moral responsibility for an outcome is proportionate to one’s degree of causal contribution to that outcome. Yet what a degree of causal contribution amounts to remains unclear in the literature, and hence so does the underlying idea of this thesis. In this paper, I’ll consider various plausible criteria for measuring degrees of causal contribution. Under each of these criteria, I’ll show that this thesis entails implausible results. I’ll also show that there are other plausible theoretical options that can account for the kinds of cases that motivate this thesis. I’ll conclude that we should reject this thesis.
Click here for the PhilPapers page.
Email me for a copy.
Against Resultant Moral Luck (2022, Ratio)
Does one’s causal responsibility increase the degree of one’s moral responsibility? The proponents of resultant moral luck hold that it does. Until quite recently, the causation literature has been almost exclusively interested in the binary question of whether one factor is a cause of an outcome. Naturally, the debate over resultant moral luck also revolved around this binary question. In recent years, however, we’ve seen an increased interest in the question of degrees of causation, and some philosophers have already explored various implications of a graded notion of causation for resultant moral luck. In this paper, I’ll do the same. But the implications that I’ll draw attention to are bad news for resultant moral luck. I’ll show that resultant moral luck entails some implausible results that leave it more indefensible than it was previously thought to be. I’ll also show that what’s typically taken to be the positive argument in favor of resultant moral luck fails. I’ll conclude that we should reject resultant moral luck.
Click here for the PhilPapers page.
Email me for a copy.
Causation Comes in Degrees (2022, Synthese)
Which country, politician, or policy is more of a cause of the Covid-19 pandemic death toll? Which of the two factories causally contributed more to the pollution of the nearby river? A wide range of our everyday thought, talk, and attitudes relies on a graded notion of causation. However, it is sometimes highlighted that on most contemporary accounts, causation is on-off. Some philosophers further question the legitimacy of talk of degrees of causation and suggest that we avoid it. Some hold that the notion of degrees of causation is an illusion. In this paper, I’ll argue that causation does come in degrees.
Click here for the penultimate version of my paper.
Epistemic Injustice (2020)
Click here for my entry on epistemic injustice in 1000-Word Philosophy: An Introductory Anthology.
In my dissertation, I argue for responsibility internalism: the view that moral responsibility (i.e., accountability, or being blameworthy or praiseworthy) depends only on factors internal to agents. Employing this view, I also argue that no one is ever blameworthy for what AI does, but that this isn’t morally problematic in a way that counts against developing or using AI.
Here’s a brief overview of my arguments. Responsibility is grounded in three potential conditions: the control (or freedom) condition, the epistemic (or awareness) condition, and the causal responsibility condition (or consequences). I argue that causal responsibility is irrelevant to moral responsibility, and that the control condition and the epistemic condition depend only on factors internal to agents. Moreover, since what AI does is at best a consequence of our actions, and the consequences of our actions are irrelevant to our responsibility, no one is responsible for what AI does. That is, the so-called responsibility gap exists. However, I argue, this isn’t morally problematic in a way that counts against developing or using AI. Below, I present summaries of each chapter of my dissertation.
Some philosophers hold that, all else equal, one’s degree of moral responsibility is proportionate to one’s degree of causation (or causal contribution). Call this thesis Proportionality. If causation doesn’t come in degrees, Proportionality is false. So, in chapter one, I discuss whether causation comes in degrees. I argue that it does by showing that all the main objections against graded causation fail and that denying graded causation is theoretically too costly. This chapter of my dissertation has been published in Synthese.
In chapter two, I argue that Proportionality is false despite the fact that causation comes in degrees. To establish this, I employ six plausible criteria for measuring degrees of causation and show that Proportionality understood according to each of these criteria entails implausible results. I also show that there are other plausible theoretical options to account for the kind of cases that motivate Proportionality. This chapter of my dissertation has been published in The Southern Journal of Philosophy.
In chapter three, I argue that there is no resultant moral luck (RML). What’s at stake in the debate over RML is best cast in terms of whether causal responsibility increases one’s moral responsibility. I draw attention to previously unexplored implications of RML and argue that these implications leave RML more indefensible than it was thought to be. I also show that what’s typically taken to be the positive argument in favor of RML fails. I conclude that we should reject resultant moral luck. This chapter of my dissertation has been published in Ratio.
Proportionality and RML are the two most plausible positions one could take if causal responsibility is relevant for moral responsibility. Hence, in chapter four, I conclude that causal responsibility is metaphysically irrelevant for moral responsibility, clarify and develop this thesis, and defend it against potential objections.
In chapter five, I argue that neither the epistemic condition nor the control condition presupposes anything external to agents. The epistemic condition rests on the idea, roughly, that one can be morally responsible only if one is aware of certain morally relevant factors. The awareness in question can be knowledge, justified (true) belief, or (true) belief. It is commonly accepted that knowledge is too strong a requirement for moral responsibility. I follow the reasoning behind this view and show that justified (true) belief is also too strong a requirement. I further argue that moral responsibility doesn’t require even true belief. And since the awareness requirement in question presupposes neither justification nor truth, it doesn’t presuppose anything external to agents.
The control condition is the subject matter of the classic free will debate. I survey the leading compatibilist and incompatibilist theories of control and argue that none of them, at least in their most plausible forms, presupposes anything external to agents. A major concern for my argument is that the debate between compatibilists and incompatibilists mainly revolves around determinism. Compatibilists argue that the kind of control required for moral responsibility—i.e., free will—is compatible with determinism, and incompatibilists reject this. Determinism is the idea that at any moment the state of the world and the laws of nature entail one unique future. So understood, determinism is not merely a feature of anything internal to agents but a feature of the world as a whole. However, I argue, (in)determinism external to agents is irrelevant to the control condition—what matters is only (in)determinism internal to agents. That is, what matters is only whether the mental events in agents are (un)determined, not whether anything else in the universe is.
I conclude that the epistemic condition and the control condition depend only on factors internal to agents. Since I also argued that causal responsibility is irrelevant to moral responsibility, there remains no condition of moral responsibility that depends on anything external to agents. Hence, responsibility internalism is true.
In chapter six, I employ responsibility internalism to weigh in on a debate about responsibility in the context of artificial intelligence. Consider autonomous systems or machines that rely on artificial intelligence, such as self-driving cars, lethal autonomous weapons, candidate screening tools, medical systems that diagnose cancer, and automated content moderators. Who is responsible when such a machine or system (or AI, for short) causes a harm? Given that current AI is far from being conscious or sentient, it is unclear that AI is responsible for a harm it causes. But given that AI gathers new information and acts autonomously, it is also unclear that those who develop or deploy AI are responsible for what AI does. This leads to the so-called responsibility gap: roughly, cases where AI causes a harm but no one is responsible for it. Two central questions in the literature are whether the responsibility gap exists and, if so, whether it’s morally problematic in a way that counts against developing or using AI. While some authors argue that the responsibility gap exists and is morally problematic, others argue that it doesn’t exist or that its existence is dubious. Drawing on discussions in the earlier chapters, I defend a novel position. I first argue that current AI doesn’t generate a new kind of concern about responsibility that older technologies don’t. Then, I argue that the responsibility gap exists—more precisely, that it is inevitable and ubiquitous. I also argue that this is not morally worrisome in a way that counts against developing or using AI. This is because neither the responsibility gap nor my argument for its existence entails that no one can be justly held accountable, or that no one has a duty to make reparations, once AI causes a harm.
Works in Progress
A paper on the AI responsibility gap (Under Review)
Abstract: Who is responsible for a harm caused by AI, a machine or system that relies on artificial intelligence? Given that current AI is neither conscious nor sentient, it’s unclear that AI itself is responsible for the harm. But given that AI acts independently of its developer or user, it’s also unclear that the developer or user is responsible for it. This gives rise to the so-called responsibility gap: cases where AI causes a harm but no one’s responsible for it. Two central questions in the literature are whether the responsibility gap exists and, if so, whether it’s morally problematic in a way that counts against developing or using AI. While some authors argue that the responsibility gap exists and is morally problematic, others argue that it doesn’t exist. In this paper, I defend a novel position. First, I argue that current AI doesn’t generate a new kind of concern about responsibility that older technologies don’t. Then, I argue that the responsibility gap exists but is unproblematic.
A paper on the problem of causal impotence in collective action cases (Under Review)
Abstract: Many large-scale problems that have arisen only recently in human history, in an industrialized and globalized world, present us with a unique challenge. Often, while people collectively make a difference, individual actions are inconsequential. Consider climate change. We all collectively contribute to its unwanted consequences. But individual actions are inconsequential: one more or one less person taking a joyride in a gas-guzzler on a Sunday afternoon makes no difference to these consequences. Donating to charity, voting, buying fair trade products, factory farming, and environmental pollution all present the same challenge. One more or one less vote doesn’t make a difference. But then it’s unclear why individuals should act against climate change or vote. This is the so-called problem of inconsequentialism. In this paper, I present a new solution to this problem by appealing to a type of action that is yet to receive philosophical attention—namely, taking a stand. I show that taking a stand can be morally valuable and reason-giving even if it makes no difference.
A paper on plausibly rejecting some types of moral luck and accepting others (Under Review)
Abstract: The most popular position in the moral luck debate is to reject resultant moral luck while accepting the possibility of other types of moral luck. But it’s unclear whether this position is stable. Some argue that luck is luck, and that if it’s relevant to moral responsibility anywhere, it’s relevant everywhere, and vice versa. Some argue that, given the similarities between circumstantial moral luck and resultant moral luck, there’s good evidence that if the former exists, so does the latter. The challenge is especially pressing for the large group of philosophers in the moral luck debate who deny resultant moral luck exclusively. In this paper, I argue that the other types of moral luck exist, but resultant moral luck does not. This is because the other types of luck can, but the results of an action cannot, affect what makes one morally responsible.
A paper on moral rightness and wrongness versus moral praise and blame (Under Review)
Abstract: It’s natural to think that one cannot be morally blameworthy for an action unless the action is morally wrong. Consider that, when blamed, people will often retort, “I didn’t do anything wrong!” Moreover, many philosophers agree that (N1) if one is morally blameworthy for doing some act, A, then A is morally wrong. Analogously, it’s natural to think that one cannot be morally praiseworthy for an action unless the action is morally right. That is, (N2) if one is morally praiseworthy for doing some act, A, then A is morally right. In this paper, I present a novel argument that (N1) and (N2) are false. Not only is the wrongness of an act unnecessary for blameworthiness for that act; one can even be praiseworthy for performing an act that’s morally wrong. Likewise, not only is the rightness of an act unnecessary for praiseworthiness for that act; one can even be blameworthy for performing an act that’s morally right.
A paper on a popular solution to the problem of moral luck (Under Review)
Abstract: Resultant moral luck is typically considered the most problematic type of moral luck. Arguably the most popular response to the problem of resultant moral luck is the idea that resultant luck, or lucky consequences, affects the scope but not the degree of responsibility. Call this the ‘Degree Scope Response’ (DSR). Philosophers also use DSR in responding to other types of moral luck and in contexts outside moral luck. In this paper, I argue that DSR fails. I then suggest that we should hold that resultant luck affects neither the degree nor the scope of responsibility. Further, I discuss various advantages of this view and show its implications for questions about free will, theories of causation, and responsibility in contexts outside moral luck. I also defend this view against the worry that it’s too revisionary.
A paper on the principle of alternate possibilities and Frankfurt-style cases (draft)
Abstract: The Principle of Alternate Possibilities (PAP) says that one is responsible for an action only if one could have acted otherwise. The so-called flicker defense is one promising line of response to Frankfurt-style cases (FSCs) in defense of PAP. The flicker defense is almost as old as FSCs. However, in recent years, the flicker defense has made an intriguing comeback as some philosophers developed stronger versions of this defense. But this ‘revived’ flicker defense has also recently been criticized. One aim of this paper is to respond to these criticisms. But part of my response requires revising the flicker defense. Hence, the other aim is to revise and build an even stronger version of the flicker defense.