I think that most people in first-world countries are morally obligated to donate substantial amounts of money to charity. I’m not the first one to come to this conclusion—in fact, it’s probably one of the most (in)famous conclusions of recent ethical theorizing. My goal here is not to come up with some extremely crazy and novel argument to this effect. Rather, I just want to confess my reasons for holding this conclusion, and hopefully persuade you, my dear and highly valued reader, if you are not already.
Drowning Children
Usually we like to mix it up around here—stay a little hip; stay a little cool—but Singer’s original case is just so straightforward that it seems sort of silly to change it much:
You are walking through a park in your original Air Jordans valued at around $5000 (rookie mistake). As you are strolling, minding your own business, you see a little child fighting for her life! She’s in a shallow pond, but due to being a pathetic little human with legs too short for her own good, she can’t reach the bottom—if only she had learnt to swim. Alas, she didn’t take those fateful swimming lessons, and seeing as there is no one around you, and you forgot your phone at home, you are left with a choice: you can either wade in to save her, completely ruining the value of your shoes, or you can pretend you didn’t see anything and walk past.
I hope most of you agree that you should sacrifice your shoes to save the child—that you are obligated to do so even. Imagine that you are meeting with a friend, and when you meet them, they tell you that they experienced the above, and chose to walk past. You would certainly think that your friend would be very blameworthy. But for not-X to confer blameworthiness, it seems that X has to be obligatory—it makes no sense to consider someone blameworthy for not doing something supererogatory.
Now the argument goes that this seems suspiciously similar to what most people in the first world are doing on a daily basis. As it turns out, charities are able to save a life for the cost of around $5000, so just like you could lose $5000 and save a life in the case described above, you always have the option to do so from the comfort of your own couch. Thus it looks like if we’re obligated to save the child in the first case, we are obligated to do so in the second, unless we have good reason to think they are relevantly disanalogous.
Note that this argument doesn't in any way presuppose utilitarianism or consequentialism. It simply has the structure:
1. You are obligated to do X in situation A.
2. If there are no morally relevant differences between two situations, then the same obligations hold in both.
3. There are no morally relevant differences between situation A and situation B.
Therefore, you are obligated to do X in situation B.
For our purposes, X will be something like “save a life at some financial cost of around $5000”. Situation A will be the drowning-child case from above, and situation B will be something roughly like the situation of an average middle-class person in the first world. Premise 1 simply rests on the intuition that you should save the child in the drowning-child case. I take this to be pretty apparent and uncontroversial. In any case, it’s the sort of thing where it’s very hard to give a substantive argument either way, other than just saying “c'mon, man!” If you don’t agree with this premise, I can do no better than give you an incredulous stare, and the rest of this probably won’t be very convincing to you. Premise 2 looks very plausible, approaching something like a conceptual truth—if there are no morally relevant differences between two situations, then what could explain a difference in moral obligations between them? Anything you point to would then surely just be a morally relevant difference. The real work, then, lies in defending premise 3. Again, note that none of these premises make any reference to utilitarianism, and you don't have to be a utilitarian or consequentialist to accept this argument—it will work within all or most moral frameworks.
Another point is that if some difference is a morally relevant one, then it looks like it will be morally relevant in any cases where it is present. For example, whether performing an act will bring someone pleasure is a morally relevant difference (if it brings pleasure, you will have stronger moral reason to perform the action, all else equal). This means that in any two cases where the only difference between the two is whether performing some action will bring pleasure or not, there will be stronger moral reason to perform the action if it brings pleasure—there are no cases where bringing pleasure to someone is not a morally relevant difference. This means that a single counterexample to some proposed relevant difference will be sufficient to show that it isn't relevant—even if the counterexample is completely separate from the question of donating to charity.1
Let’s look at some candidates for morally relevant differences between the two, and see whether they hold up:
There’s Not Just One Person to Save
Probably one of the first reactions you will have to this (it was at least one of the first for me), is that there’s more than one person to save in the real world. In fact, there are so many people to save that you couldn’t possibly be expected to save all of them. But if this argument is right, then it looks like we should waste our entire lives trying to save as many people as possible, which is surely absurd!
I do think that it’s too much to say that we should spend our whole lives with the sole goal of saving as many lives as possible, but I also don’t think that means that you shouldn’t try to save a single life, or even a substantial number of lives: suppose that you are at the pond, and have decided to save the child. You have just yelled “hold on, I’m coming to save you!” and are about to jump in. But just as you’re about to jump, you look behind a shrubbery and notice that the pond is much bigger than your initial estimation; there are thousands—nay, millions—of children drowning in this comically large pond. Having found this out, you say to the child “sorry pal, nothing I can do” and move on. This doesn’t look like the appropriate reaction to me, to put it mildly. The fact that there are many people in need doesn’t make it any less good, right, or obligatory to save the one child.
I do also agree with the countervailing intuition that you don’t have to spend the rest of your life pulling child after child out of this pond. Likewise, you don’t have to quit the job you love just so you can take a lifesaving course in order to min-max your child-saving efficiency. This, I take it, is analogous to working a high-paying job that you don’t necessarily like, in order to be able to donate as much as possible to charity—it would probably be good, but also surely supererogatory. It is clear that you will at some point have done enough for it to be supererogatory to do more, and this point looks to be earlier than the point where saving more children would no longer be a net benefit. But at the same time this point is clearly after having saved at least one child, probably several.2 (Well, really we should be talking of rates, as the number of children you ought to save throughout your life surely depends on how long your life is.)
So how many children are you obligated to save exactly? Well, I don’t really know, sadly. But as I said, I’m at least pretty sure that the number is greater than 1 and less than [as many as possible]. It at least looks like you should make a reasonable effort—probably to a point where it means you miss out on some leisure, but without making you miserable.
You may object that there is no reason why you should stop at any point, since once you have saved the first child (and changed into your second pair of Air Jordans worth $5000 for some reason), you are in the exact same position as you were one child-saving ago, so if you were obligated then, you’re obligated now. But I don’t think you are in the same situation. For one, your net worth is now $5000 lower; for another, there is the historical fact that you have just spent $5000 and some of your time saving a child’s life. I am not sure that both of these are morally relevant, but I’m inclined to think so.
As an analogy, suppose you are walking down a street filled with homeless people who are hungry. You give the first person $10 to buy a sandwich, then give the second $10 to buy a sandwich, and I guess you can see where this is going. It seems like you have strong reason to give the first person the money (I’m not sure that you’re obligated to do so, but somewhere in the neighborhood at least), and also pretty strong reason to give to the second. But at some point, you are surely completely excused if you don’t give any more money away, and importantly it looks like this point comes quite a lot before the point where you are at the same level of wealth as the homeless people.
This can be accounted for by the first difference (that you are poorer after giving the money away), but I think the historical fact that you have already sacrificed some time and money is also relevant. Imagine now that you are walking down this street again, except that you are, say, $1000 richer at the beginning. It seems like you should still give money away, and more than you did before. But importantly, it looks to me like you are fully excused before you reach the level you reached before—you are allowed to walk away with more money than you did before. If this is correct, then an at least plausible account is that the historical fact of your previously having donated is doing the work.
With this it seems like there being an inexhaustible supply of drowning children, or children dying of malaria or whatever, doesn’t obligate you to spend the rest of your life and money in an effort to save them. But at the same time it doesn’t mean that you shouldn’t save anyone, and I think the vast majority of people in the first world are doing less than they ought. If you’re unsure, it’s probably better to err on the side of giving too much, as I suspect you have a pretty strong bias against donating, due to it costing you money (I at least have).
The People are Far Away
Another difference that probably jumps out immediately when you hear the argument is that the child in the pond is very close, while the people you are saving with your donations are very far away. I certainly agree that this is a difference here, but it doesn’t really seem like a morally relevant one. Suppose we modify the situation a little: instead of it being you who is walking past the pond, it is your super-modern-waterproof-remote-controlled-robot-avatar™, which is—on this fateful day—sadly wearing your Air Jordans valued by credible collectors and shoe-enthusiasts at roughly $5000. You, on the other hand, are lying on your sofa on the other side of the earth, controlling this robot through a VR headset (or would it be AR at this point? Or just R?). When you see this child drowning, and are able to save her, are you no longer obligated due to actually being on the other side of the earth? That seems highly dubious! The number of meters between your flesh-and-blood body and the drowning child just seems completely irrelevant. I mean, imagine that the pond was farther away, but that you had a long stick you could pull her out with—it just looks utterly implausible that your obligation is now weaker than when you were closer to the pond!
This perhaps seems like a bit of an uncharitable—or at least too literal—interpretation of the objection. Maybe the difference is one of causal distance; saving the drowning child is a much more direct action, whereas saving a life through charity is a much longer causal process, where your action of sending money to some charity’s bank account is very far removed from any life being saved. This at least seems a little more promising, but I still think it’s very implausible that it’s morally relevant. Suppose that you are in the footbridge trolley problem: a trolley is hurtling towards 5 people tied to a track—oh the horror! Luckily for them, and unluckily for him, a quite large man is standing on a footbridge going over the track. He is so large, in fact, that pushing him off the bridge and onto the track would result in the trolley coming to a halt, the man dying in the process. You are in a position to push him off—should you do so? Most people have the intuitive reaction that you shouldn’t. But wait, I wasn’t finished! You will not be pushing him with your own two hands. Instead, you can activate an intricate Rube Goldberg machine that will, after a long causal process, result in the man being pushed off the bridge. Should you now cause the man to be pushed? Well, whatever you answered before, I don’t expect that this changed your judgment very much. If you think there’s some relevant difference between sacrificing a person to save some lives and giving money to help someone, we can instead imagine that you are giving money to a homeless person, except that you do it by way of a highly complex Rube Goldberg machine. It doesn’t really seem like that changes the situation in any relevant way (except for adding a bit of ✨pizazz✨ to the whole affair).
Again, I think the intuition that’s driving this objection is still a different one. Specifically, I think the difference that drives the intuition is that saving a life through donations is way more uncertain than pulling a child out of a pond. It’s not clear that my donation actually even causes a child to be saved, since there’s just no firm connection between my pressing a “donate” button and a child not dying from malaria—regardless of how long or short the causal chain is. While I think it’s true that there’s no perfectly direct connection, or at least that we can’t trace how the connection goes, I also don’t think that’s relevant. Rather, I think we should care about the expected outcome of our action, and the expected outcome of donating $1 is that $1’s worth of charitable work will be done (of course accounting for administration costs and the like).
Imagine that you are standing in front of a lollipop vending machine, except that it’s no ordinary lollipop vending machine (uh oh). Unlike most vending machines, this one has an unbelievably complex mechanism that sometimes zaps some number of nearby people (except for the user of the machine) with lightning, killing them instantly, when a lollipop is bought. Though the machine is incredibly complex, and no one can predict exactly what will happen when a given lollipop is bought, there are many experts who have done extensive statistical analyses and written very detailed reports on the workings of the machine. From this investigative work, they have determined that even though buying a lollipop will often do nothing, on average the machine kills 1 person for every 1 lollipop that is bought. Furthermore, every killing is a result of lollipops being bought, and there would be no killings if there were no transactions, even though no killing can be traced back to any particular transaction, or even any reasonably small subset of transactions. Standing in front of this machine with a ravenous desire for lollipops, and cursed with knowledge of the facts just described, you ponder whether you should buy a lollipop.
Assuming that you don’t think it’s worth killing over a lollipop, I think it’s very clear that you shouldn’t. Even though you cannot be sure that there will be any negative consequence from your purchase, and your coin might just end up collecting dust in the coin storage without ever causing any harm, the expected outcome of the action is 1 person being killed and thus you should reason as if that’s what will happen. A way to make this clearer is to consider the value of using the machine something like a million times. If you did this, it would almost certainly result in around 1 million people being killed by the machine. But before each time you use it, you would have all the same evidence and considerations for and against, and so you should obviously consider the value of each press to be a millionth the value of a million presses. I think this point generalizes to positive outcomes as well as negative ones, and so it should be quite clear how this may be adjusted to resemble the case of donating to a charity.3
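To make the expected-value point concrete, here is a minimal numerical sketch. The particular probability distribution is my own invention (the thought experiment only fixes the average of 1 death per purchase); a 1-in-100 chance of killing 100 people is just one way of getting that average while making most purchases harmless.

```python
import random

random.seed(0)

def buy_lollipop():
    """One hypothetical mechanism: most purchases kill no one, but
    1 in 100 kills 100 people, so the expected deaths per purchase is 1."""
    return 100 if random.random() < 0.01 else 0

# Simulate using the machine a million times, as in the argument above.
trials = 1_000_000
total_deaths = sum(buy_lollipop() for _ in range(trials))

# The average hovers around 1.0: even though most purchases do nothing,
# each press is worth a millionth of a million presses.
print(total_deaths / trials)
```

The point carries over unchanged if the machine does something good instead of something bad: each individual use should be valued at its expected outcome, however lumpy and untraceable the actual outcomes are.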
I do still think there’s one more complication here: while it’s pretty clear what is the harm and who you have harmed when you end a life, it’s not so clear what the benefit consists in or who is benefitted, when saving a life in the way that charities do. After all, when distributing malaria nets, for example, it’s very hard to point to any particular person and say “your life, my dear friend, has been saved by this donation, and you would have been dead, had we not come with these nets”—it’s not clear that there is a fact of the matter as to whether that person would have died or not, or we can at the very least not know it. But this doesn’t seem morally relevant either (you may begin to notice a pattern here). Imagine two situations: 1) unless you press a button, John will die, 2) unless you press a button a person out of a group of 100 people is selected genuinely at random to die. Is there a stronger reason to press the button in (1) than in (2)? It doesn’t seem like it. While in (2) you cannot point to any particular person you saved after pressing the button, not doing so would still have resulted in one of these people dying, and that seems like all that’s relevant.
A last part of this worry is that you may not be responsible for the saving of a life—the people who work for the charity are, and you can’t have several people fully responsible for the same action. While this principle sounds right when you first hear it, I just think it’s clearly false upon further reflection. Say that you are standing in front of a psychopath, nay, a raving maniac—the sort of person who hangs the toilet paper in an “under” position without even flinching. Seeing as they have no regard for all that is good or right in this world, they are certainly not to be trusted with pointy or sharp objects. In fact, they have told you that “if you give me that knife you’re conveniently holding, I’ll go out and kill a person dead with it”, and you have it on very good authority that they’re telling the truth. Seeing that you yourself have little respect for the dignity of human life, you hand over the knife and to no one’s surprise, the maniac goes out and thoroughly stabs a person (they’re not making it). Now the question is who’s responsible. Following the above principle, you conclude that you did nothing wrong, or at least that you bear no responsibility for the murder. That’s just obviously the wrong conclusion! You knew with certainty that this would happen if you acted like you did, and that seems like it is wholly sufficient for making you responsible, regardless of whether it was another person, or an inanimate object that carried out the rest. If you are any less culpable, it looks like that’s only to the degree that you were uncertain what would happen if you gave the maniac the knife. Responsibility may roughly follow expected outcome here, though probably not completely, as there seems to be a degree of moral luck involved in responsibility—but that’s beyond the scope here, and the important point is just that several people can be responsible for a single action.
Should You Steal Money for Charity?
You may worry that this argument licenses stealing money in order to donate it to charity. After all, if you can save a life by donating $5000, you can also save a life by stealing that money and donating it. I’m not so sure that this is the case. Imagine that this time you aren’t wearing any expensive shoes. Rather, there is a gate stopping you from getting to the pond, and to open it, you need to cough up $5000. You don’t have that kind of money, but luckily you are very intimidating. Would you be allowed to find the nearest person and rob them of $5000 to open the gate? I’m not so sure. You could hope that they would be willing to give it to you, but it’s not as clear that you would be allowed to rob them for the money.
I think this is really just a standard unintuitive case for utilitarianism. If you don’t think that we should steal money to save lives, you are probably just not a utilitarian. But remember that we didn’t presuppose utilitarianism in this argument. You can easily accept the obligation to donate under normal circumstances without drawing the additional conclusion that you should steal money to donate—just like you can draw the conclusion that you should pull the lever in the standard trolley problem without drawing the conclusion that you should push the person in the footbridge variation. So if you aren’t a utilitarian, you can easily avoid the additional inference; and if you are a utilitarian, well, then I think it’s pretty straightforward that you are obligated to donate anyways. Either way there’s no reductio against the thesis of this post.
Rich-People Morality
Before we go to the last objection, I’ll tackle a bit of an unserious one: “by donating to charity, you are just feeding your white savior complex,” or “this idea of an obligation to donate to charity is just an excuse for rich people to feel good about themselves”. I think all it takes to refute these objections is to actually imagine taking them seriously. Imagine that you are standing by the pond, and the child is a black child, and you are a white, rich man. Instead of jumping in to save the child, you yell “sorry buddy! If I jumped in, I would be feeding my white savior complex. Additionally, I would just be getting an excuse for feeling good about myself for being rich, since I otherwise couldn’t afford to lose these expensive shoes to save you. I hope you can see that my hands are tied here—good luck with finding another way out!” That doesn’t seem like a particularly good excuse, to say the least. In fact, I’m pretty sure that the child doesn’t give a single flip about those things right now (she hasn’t learned the word “fuck” yet). We can maybe accept that it would be better if you didn’t feed your huge privileged ego by donating to charity, yet here we are, sadly.
But even if you’re not persuaded by any of this, that doesn’t really matter, as surely you wouldn’t use donations as a way to feed your white savior complex or feel good about being rich. You could surely donate with the proper motivations and awareness of all the underlying malicious reasons for doing so, and thereby do it with a clean heart and soul. What I am arguing here is not that you should condone the way that other people donate to charity, or their motivations for doing so, or anything. Rather, I am arguing that you should donate to charity (as well as everyone else who is fairly well-off)—it’s really irrelevant what you think the reason for others doing so is, so long as you do it with the right intentions.
Come and See the Violence Inherent in the System!!!!
The last objection I’ll try to answer here revolves around the unjust system we live in, which is in at least some sense the cause of the suffering we are trying to prevent in the first place. I’ll first tackle a pretty weak version of this, which is basically that you shouldn’t donate to charities, since by doing so you are participating in the evil system of capitalism. Just recall the version of the thought experiment from before, where you have to pay to open a gate in order to save the child. It seems like if you don’t have to steal the money, you are obligated to pay. Suppose that it’s actually a joint-stock company that owns the gate and is getting the money. It seems like that doesn’t really change much, and if you said to the child “sorry, I can’t save you, since doing so would mean participating in capitalism, which is evil”, I think that would be pretty abhorrent. And I mean, charities are also non-profit, so it’s even less bad in reality than in the case described. If you’re still worried, you can donate through something like GiveDirectly that, while not completely without a middle-man, is pretty close to just, well, giving directly—surely the act of exchanging money is not inherently evil.
A more serious critique is that by donating to charity, you are only treating symptoms, while we should actually treat the root cause, which is unjust political systems. I think there is certainly some merit to this. While I’m no expert in international economics, I take it that at least a large part of global poverty is—at least in part—caused by things like trade restrictions, national debt, historical injustice, etc., so I certainly think that there are many aspects of “the system” that should be changed (though I don’t have the qualifications to know which parts or how they should be changed). The thing is, I don’t think most of the people reading this could even make a dent in such a change, if they spent their entire waking lives trying to do so. Some of the people reading this may be influential politicians, public intellectuals, or something like that. If that’s you, then you could probably have a reasonable chance at making some difference.
But for the rest of us, the best we could do on the systemic-change front is probably encouraging our friends to vote for a certain political party, spreading awareness for some cause—and of course donating to organizations that encourage such change. For example, you could donate to organizations like Dansk Vegetarisk Forening, CATF, GFI, or even your favorite political party, that try to address root causes, rather than “treating symptoms”. On top of that, even if you only spend time changing the system—say by doing activism or being a politician—then plausibly you would still be obligated to donate money to charity, unless that money could more effectively be spent in your system-changing efforts. You could probably be excused for donating less than if you didn’t spend much of your time on this sort of change, though.
I will say that if this is the point where you jump off, then I’m not particularly sad about that. But if you think that you can actually make a greater difference by trying to cause political change or something, you should also then actually do something. If you think that it would be better to fight capitalism than to save lives through charity, then the conclusion you should draw is not just that you shouldn’t donate to charity and leave it at that, but also that you should spend those resources fighting capitalism instead. This leads nicely into the last part.
Where to Donate
If you are pouring your hard-earned cash into some charity, then you had better hope that it spends your money well. It actually even appears like it might be morally wrong to donate very suboptimally or in a careless manner: as you are walking in the park you find yourself with a single child drowning to your right, and ten drowning to your left. You can save all the children in either pond, but you don’t have enough time to save the children in both ponds. While it’s certainly extremely regrettable and horrible that the one child should die, it would seem completely mad to save the one rather than the ten. Similarly, if you can make ten times as big a difference by donating to charity A rather than charity B, then you should surely do so.
As it turns out, there are big differences in how much bang for your buck you get. For example, if you want to cover the costs of giving someone a guide dog through the aptly named Guide Dogs, it will set you back a cold £38,110. If, on the other hand, you want to cover the cost of a sight-restoring surgery through Cure Blindness Project, that will cost you around $70. With the exchange rates as of writing this, that means that you could restore the sight of around 708 people for the cost of a single “guide dog partnership”. This is of course a bit of an extreme example, but in general there are very large disparities between the most and the least effective charities. Luckily, there are organizations like GiveWell and Giving What We Can that try to estimate the effectiveness of different charities.
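For the skeptical, the arithmetic behind that comparison is easy to check. The two cost figures are the charities’ own estimates quoted above; the exchange rate is an assumption on my part, roughly the GBP/USD rate around the time of writing.

```python
# Back-of-the-envelope check of the guide-dog vs. surgery comparison.
guide_dog_gbp = 38_110   # Guide Dogs: one "guide dog partnership"
surgery_usd = 70         # Cure Blindness Project: one sight-restoring surgery
gbp_to_usd = 1.30        # assumed exchange rate (approximate)

surgeries_per_dog = (guide_dog_gbp * gbp_to_usd) / surgery_usd
print(round(surgeries_per_dog))  # around 708 surgeries per guide dog
```

Even if the exchange rate drifts by a few percent, the ratio stays in the hundreds, which is the only thing the argument needs.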
It is of course worth noting that you might not think that the way these organizations estimate effectiveness is very useful. Maybe you think that they have some strange background assumptions, or that they are missing some crucial point, perhaps like the stuff discussed in the last section. Well, in that case you should just donate to whatever you think would be effective. If you think that it would do more good to give $5000 to a racial justice organization than to donate malaria nets, then you should do that! In fact, you would then have even stronger reason to donate than first anticipated, as doing so would literally be better than saving a life.
I must say that I don’t think such organizations have particularly crazy assumptions, and I trust that they will do a better job at estimating the effectiveness of different charities than I could ever hope to do. So it’s probably best to just defer to the experts, unless you have particularly good reasons to doubt them. Especially since you can just choose to avoid donating to stuff like existential risk prevention, if you think that’s silly, and instead donate to animal welfare or disease prevention organizations.
You may have noticed that I haven't mentioned the words “effective altruism” in this article (well, until right now). The reason for this is that I don’t think that you need to give a label to donating to charity effectively and make it a movement; you don't have to be a finance-bro who buys castles to think that it's good—obligatory even—to help others in need. While making it into a movement may help to motivate certain sorts of people, I think it also turns off many more, since donating to effective charities suddenly acquires all sorts of other associations that you don’t like.4 I suspect many of you have been thinking “God! Another crypto-autist telling me to waste my life in tech or finance, just so I can feel like I’m making some difference. When will these people shut up?”
So if you are one of the people who gets a bad taste in your mouth whenever you hear talk of EA, I want to encourage you to consider whether you actually think the arguments for the obligation to donate fail, or whether it's just because you have negative associations with the group called Effective Altruists. If it's the latter, then I would just hope that you can separate the negative connotations of the group from the act of giving to an effective charity. From this position, you can maybe consider the arguments impartially, and even if you think they’re ass, it’s at least a step in the right direction that the reason you reject them is due to a flaw in the arguments, rather than a gut reaction against what they represent. It’s of course no secret that I think the arguments do work. If you are of a like mind, then I recommend looking into which charities do a good job of helping others. Even if you are not completely aligned with their values, I don’t think you can go wrong looking at something like GiveWell—from there you can of course do your own critical assessment and research. Even if you are not in a position to give large amounts of money away, anything is better than nothing (quick maths), and I hope you will at least consider it.
Most of this has been reasonably abstract, and I think we quickly become desensitized when thinking about all these drowning children. For this reason I think we should end by stopping and remembering what we are actually talking about. Think about a person who’s very close to you, and try to imagine how it would impact you if you got a phone call right now telling you they had just died. How completely devastating would that be? How much would you sacrifice to prevent it? Or if you have lost someone close to you, how much would you sacrifice just to be able to hug them one last time and chat over a cup of coffee? These are the sorts of things that you are able to help actual human beings—with lives, feelings, and desires just as real as yours—avoid going through. If I have moved even one person to give even a little bit to help others, then I think the time spent writing this has been well worth it!
A last qualification that I didn’t feel was worth including in the main text is this: much of this (if not all of it) builds on seemings about cases. The reason for this is that this just is how to do ethics—regardless of your metaethical persuasions. The reason to think that pain is bad is that pain seems bad, and the reason to think that we have stronger reason to refrain from harming than to alleviate harm is that it seems like it. To start with some ethical theory and then look at cases is putting the cart before the horse—ethical theories are theories to account for the “data points” of our ethical intuitions about cases, not the other way around. I only mention this because I sometimes get comments like “all of this is just built on nothing but ‘it seems like’ and ‘it looks like’, rather than substantive argument”, when I share a post about an ethical issue. Yes, it’s built on seemings, which is what it should be.
I think Colin McGinn’s article “Our Duties to Animals and the Poor” does a good job of showing why it’s pretty absurd to hold the strong obligations view. For example, he gives nice cases like this: on the strong view, attractive women would plausibly be obligated to prostitute themselves to lonely men. This seems to call for some degree of supererogation. I do think McGinn is too quick to dismiss the possibility of maintaining some degree of obligation to help the poor without committing to the strong obligation, though.
You may be thinking “wait, in the last section, you argued that consecutive actions should cause you to act differently”. This is true, but it’s important to separate the axiological from the normative question here. I am arguing that the action of using the machine has the same value each time, just like saving a child has the same value for each child. The point I made about saving children was rather a point about what you are obligated to do, regardless of how good that thing is. To make this clearer, we can just imagine that the machine instead did a good thing every time you pressed it. Even if this is the case, you wouldn’t be obligated to spend the rest of your life pressing the button—meaning that obligation and value come apart. What I am arguing in this section is that the expected value of buying a lollipop is 1 person being killed. Likewise the expected value of donating $5000 to charity is one life being saved. The question of how many times you are obligated to do this is separate from the value of the action.
Just to be clear, this is not me condemning the Effective Altruism movement or anything—I think it is probably one of the biggest forces for good in the world. The point is simply that you don’t have to be an Effective Altruist and join this movement in order to buy the arguments for the obligation to donate to effective charities.