Sunday, November 26, 2017

Why Abortion is (not) Immoral: a followup

This is a followup to my previous post, "A Review of 'Why Abortion is Immoral'".  I want to follow up for two reasons.  First, in my original post I made a serious mistake, which I want to acknowledge, and also explain why I don't think the mistake impacts the overall validity of my original argument.  Second, Peter Donis introduced an interesting new wrinkle in the comments to that post, which I want to discuss at some length.

The mistake I made was in my claim that Don Marquis "moved the goal posts" in his justification for why the future-of-value criterion (FOVC) does not imply the immorality of birth control.  The claim itself wasn't wrong; he does move the goal posts, just not where I said he did.  What I said was that his justification required that a future-of-value be bound to a particular thing, and that this was not part of his original criterion.  That was wrong.  It was part of his original criterion, as commenter Publius kindly pointed out.

Here is Marquis's original presentation.  I've added a highlight to the part that I missed (or at least forgot about):
What primarily makes killing wrong is neither its effect on the murderer nor its effect on the victim’s friends and relatives, but its effect on the victim. The loss of one’s life is one of the greatest losses one can suffer. The loss of one’s life deprives one of all the experiences, activities, projects, and enjoyments which would otherwise have constituted one’s future.  Therefore, killing someone is wrong, primarily because the killing inflicts (one of) the greatest possible losses on the victim.
He continues, but notice the subtle shift from the third to the first person:
To describe this as the loss of life can be misleading, however. The change in my biological state does not by itself make killing me wrong. The effect of the loss of my biological life is the loss to me of all those activities, projects, experiences, and enjoyments which would otherwise have constituted my future personal life. These activities, projects, experiences, and enjoyments are either valuable for their own sakes or are means to something else that is valuable for its own sake. Some parts of my future are not valued by me now, but will come to be valued by me as I grow older and as my values and capacities change. When I am killed, I am deprived both of what I now value which would have been part of my future personal life, but also what I would come to value. Therefore, when I die, I am deprived of all of the value of my future. Inflicting this loss on me is ultimately what makes killing me wrong.
He does this because he wants one of the consequences of his theory to be that killing hermits is morally wrong.  The only way to do that is to measure the future value of a human life by the quality metric of the person living it.  We don't want to have to find someone else to vouch for us in order to establish our own value.

Marquis continues by concluding:
This being the case, it would seem that what makes killing any adult human being prima facie seriously wrong is the loss of his or her future.
This of course does not mean that the FOVC only applies to adult human beings.  The form of the argument is, "what makes the killing of adult human beings wrong is FOVC, therefore FOVC is a valid criterion by which to judge the wrongness of killing, and hence it is wrong to kill anything that values its own future."

Marquis then goes on to list four redeeming qualities of FOVC which I listed in the original post.  The fourth of these is:
In the fourth place, the account of the wrongness of killing defended in this essay does straightforwardly entail that it is prima facie seriously wrong to kill children and infants, for we do presume that they have futures of value.
Note the highlighted words.  These are where he actually moves the goal posts.  It's a subtle but crucial shift, and I think that may be why I missed it the first time around: "WE presume that THEY have futures of value."  Indeed fetuses do have futures of value relative to other people's quality metrics.  But Marquis has explicitly disclaimed this mode of reasoning!  It is not the effect of killing on friends, family, or concerned bystanders that makes killing wrong; it's the negative impact on the victim as assessed by the victim.  This is not an accident; it's the only way to save the hermits.  It's also the only way to avoid the conclusion that euthanasia is wrong.

The problem for Marquis is that fetuses do not and cannot possibly value their own lives.  To value anything you have to have a brain, and fetuses don't.  And it's even worse than that: the essential ingredient for valuing things is not a brain but a mind.  (This is why it's generally considered OK to kill brain-dead people and harvest their organs: they have brains, but not minds.)  Newborn babies have brains, but whether or not they have minds is debatable.  In particular, it's debatable whether a newborn human has more of a mind than, say, an adult chicken.  In fact, if you present the question to a chicken in a form that it can understand (e.g. standing over it with a butcher's knife in your hand) I'll wager it will give you some pretty definitive indications that it does indeed value its own future.

So not only does FOVC fail to save fetuses, it even fails to save newborns, at least as long as we find it acceptable to kill chickens for food.  Oh well, at least the hermits can breathe a sigh of relief.  (And maybe the chickens if people really start to take Marquis seriously.)

I suppose the reason I missed this is that I was trying to give Marquis the benefit of the doubt, because the theory as he actually presents it is just hopeless.  The only way I can see to salvage it is to accept the moving of the goal posts, accept the premise that babies have futures-of-value because we adult humans say they do, and reason from there, at which point you run into the problem I described in the previous post, namely, that it's hard to decide where to stop the extrapolation backwards in time.  If you're going to impute value all the way back to the zygote, why stop there?

Enter Peter Donis with an unusually innovative (by the standards of the abortion debate) proposal to draw the bright line at implantation rather than fertilization.  Note that it is not even worth considering this unless we have already abandoned Marquis's FOVC-AABTV (As Assessed By The Victim).  We have to accept, either as an axiom or as a consequence of some other criterion, that the value of a newborn infant has already crossed the threshold beyond which it is morally wrong to kill it.  Then -- but only then -- we can ask: where was this threshold crossed?

The overwhelmingly most popular answer to this question (by those who accept its premises) is: at conception.  But this has problems with regard to the moral status of frozen embryos, the destruction of which most people do not regard as a moral transgression on a par with murder.  Peter's suggestion of drawing the line at implantation rather than conception is designed to solve that problem, along with several others that depend on events that are common before implantation but rare afterwards.

But this is only a temporary solution.  The problem of the moral status of frozen embryos only exists because we actually have the technology to freeze embryos.  Implantation is only a bright line because we don't yet have the technology to incubate an embryo outside of a womb.  But that constraint is probably only temporary, and it would be nice to have a moral framework that was AW-ready (Artificial Womb) as well as IA- and AI-ready.

The straightforward extrapolation of Peter's implantation criterion to artificial wombs is that an embryo crosses the moral threshold when it is taken out of the freezer and implanted into an artificial womb.  So let's do a thought experiment: a couple decides to have a kid, takes an embryo out of the freezer and puts it (literally!) in the oven.  Let's suppose that this is early days and the technology has not yet advanced to the point where you can order a Mr. Womb machine from Amazon.  You have to pay a company to rent and operate their machine.

Three months into the process, both parents lose their jobs and are no longer able to pay the bills.  What should happen?

Or suppose that the technology has advanced to the point where you can buy a Mr. Womb for $199 and conduct this entire process in the comfort and privacy of your own home.  Now one day the couple's six-year-old daughter decides she wants a little sister, takes an embryo out of the freezer, pops it in and pushes the button.  Some hours later, the parents wake up and are horrified to discover what little Suzie has done.  They can't afford another child.  If they pull the plug at this point, have they committed murder?

It's also interesting to construct similar thought experiments based on hypothetical "gestational" processes for AIs.  I started writing one of those up and it turned into a very long passage (I think it could actually make a good premise for a science-fiction novel!) so I'm going to set that aside for now.

41 comments:

Peter Donis said...

Very interesting follow-up! Well worth multiple comments, so this will probably be only the first of several from me. :-)

First, a general comment on the desire to have one's morality be "future proof". What, precisely, does that mean? As far as I can tell, you mean by it that whatever moral principles we choose now should still work under any hypothetical state of increased future knowledge or future technology.

I'm not sure I agree with that requirement, because future knowledge and future technology can make us aware of possibilities that we simply weren't aware of before. Since such possibilities are open-ended, I don't see how we can possibly expect to find moral principles that will handle them in the general case. I think we should be willing to allow for the possibility that something we learn in the future might require us to simply abandon a moral principle we thought was right. (Note that this is similar to the way that science must leave open the possibility that some future experiment will force us to abandon a theory we thought was right.)

Second, a general comment on human moral intuitions. Don Geddis, in response to one of your idea-ism posts, raised the possibility that human moral intuitions might not all be consistent--or, to put it more strongly, that it might not be possible to start from human moral intuitions and arrive at a consistent set of moral beliefs in reflective equilibrium. I think that's the case.

But I also think something even stronger is the case. I think it's not possible to find any single moral principle, whether idea-ism or anything else, that handles all cases, even if we pare down the set of human moral intuitions to the point where we can arrive at a consistent set of moral beliefs in reflective equilibrium. Part of the reason for this is that, as I said above with regard to future-proofing, we can't possibly be sure that we can anticipate everything that the future will throw at us.

Another reason, however, is that I don't think human values, even the "pared down" ones we get when we discard obvious outliers (e.g., the belief some religious fundamentalists have that it is justifiable to make war on people who don't share their beliefs), are all commensurable; I don't think there is a single quality metric that captures all of them. That makes it impossible to find a single moral principle that covers all cases, since any such principle would implicitly embody a single quality metric.

I make these general comments not to argue for them, but simply to make it clear where I am coming from. I'll follow up with some more specific comments separately.

Peter Donis said...

Now for some more specific comments.

You are correct that the obvious extension of my implantation "bright line" to artificial wombs is the point at which the embryo is unfrozen and placed into the artificial womb. Your thought experiments all share the same feature: what happens if that action is done without a proper anticipation of the consequences? (In the first case, it's the parents not allowing for financial hardship; in the second, it's the child not understanding the implications of what she's doing, and the parents not taking proper precautions.)

My response to these cases is simple: inadvertence is no defense against moral wrongness. If in fact you are creating something it would be morally wrong to destroy when you put the embryo into the artificial womb, then that fact is independent of your state of mind. You can't get out of the obligation you incurred by saying you didn't mean to. If it's an obligation, it's an obligation, and if you can't fulfill it, you have a serious problem.

Prudent owners or renters of artificial wombs would of course take precautions against such mistakes. For example, they would take out an insurance policy before renting an artificial womb that would make the payments if they became unable to. (Prudent owners of such devices would make having such a policy a condition of being able to rent them.) They would put a lock or something similar on the artificial womb so little Suzie couldn't put an embryo in it by mistake.

Of course, this doesn't mean we must in fact be creating something it would be morally wrong to destroy when we put an embryo in the artificial womb--i.e., it doesn't tell us whether or not the bright line at implantation or its equivalent is right. But it doesn't tell us whether it's wrong either. So while I find these thought experiments interesting, I don't think they count either way towards telling us whether pulling the plug on an artificial womb with a developing embryo or fetus inside it constitutes murder.

Don Geddis said...

I appreciate the shout-out from Peter! And to follow on to his comment: Of course, I do understand that you're exploring the Marquis essay / theory, and so trying to judge it on its own terms and see whether it makes sense. All that said, my own view of all morality is that it is a useful rule-of-thumb, when you don't have time for a detailed analysis, and you're looking for a quick answer about what you "should" do.

But if you're actually trying to resolve a difficult quandary, I personally would throw out all the "right / wrong" labels, and all the moral language. (You really can't get "ought" from "is".) At the end of the day, you just drop into a kind of consequential utilitarianism: what are the policy choices we have available to us, what possible future worlds do we predict based on the policy choice we make today, and which set of future worlds do we agree that we prefer? Actually do the hard work to look at what would happen under different scenarios, and choose the world you wish to live in.

I find frustrating the typical moral reasoning process of: let's first agree on a generic moral rule; then let's find some odd corner cases that violate our intuitions; but now we are stuck with the outcome of those corner cases, because we already agreed on the rule in step #1 (before we considered the odd cases). I don't find that reasoning very compelling.

Everything is grey areas. There is no reason, a priori, to expect the universe to offer bright moral lines. Take, as a simpler example, the question about whether an entity is "alive". There are lots of easy clear cases, such as "lions" and "humans", vs. "rocks" and "ice". But you would be mistaken to think that the universe offers a bright line between "living" and "not living", even though most of the common-sense examples seem easy to categorize. I would suggest there is a similar impossible-in-principle nature to the question of "is it wrong to kill this thing?" There will always be various "things" which can be found right at the border of "maybe right, maybe wrong".

I suppose I can appreciate, at a meta-level, that the moral reasoning style of argument can be effective persuasion to many people, so I acknowledge it as a tool to effect political change. But I certainly wouldn't agree to that reasoning method, if you're actually asking the more difficult question about "what should our policies be?" (As opposed to the more practical question of "how can I get my preferred policy enacted?")

Ron said...

@Peter:

> First, a general comment on the desire to have one's morality be "future proof". What, precisely, does that mean? As far as I can tell, you mean by it that whatever moral principles we choose now should still work under any hypothetical state of increased future knowledge or future technology.

Not quite. I'm happy to restrict it to *plausible* states of technological advancement or scientific discovery. I include in these the development of AIs and AWs, and the discovery of IAs. But I think we can safely ignore, say, time travel into the past.

@Don:

> I find frustrating, the typical moral reasoning process of: let's first agree on a generic moral rule; then let's find some odd corner cases that violate our intuitions; but now we are stuck with the outcome of those corner cases, because we already agreed on the rule in step #1 (before we considered the odd cases). I don't find that reasoning very compelling.

But that's not how it works at all. You have it exactly backwards. You don't start with the rule; you start with what you want the results to be. Then you see if you can find a concise rule whose consequences include all of those results. It's not unlike the process of fitting scientific theories to data, and it's a worthwhile exercise for the exact same reason: high-fidelity data compression just turns out to be tremendously useful. Furthermore, it's useful even if you don't get it exactly right. Newtonian mechanics is wrong, but nonetheless tremendously useful.

In the case of science we're asking: can we explain why the universe behaves the way it does? In the case of ethics and morality we're asking: can we explain why people have the moral intuitions that they have? In the case of science, the ability to find those rules shows that the universe is lawful rather than random or capricious. In the case of morality, the ability to find those rules indicates that our behavior is lawful rather than random or capricious. Exhibiting lawful behavior, and dealing with entities that in turn exhibit lawful behavior, has survival value.

Put it another way: if we both have reliable models of each other's behavior then we have a much better basis for interacting productively with each other (or choosing not to interact with each other) than if we don't. For that reason alone it's worth pursuing such models.

Peter Donis said...

@Ron:
> We have to accept, either as an axiom or as a consequence of some other criterion, that the value of a newborn infant has already crossed the threshold beyond which it is morally wrong to kill it.

I agree with this, but I don't think it follows that the reason why a newborn infant has crossed the threshold has to be because we adult humans say its life has value. In the normal course of development, that newborn infant will become an adult human that values its own life. Up to that point, other adult humans (normally the infant's parents) are guardians of the developing person's future of value. Obviously the parents (or other guardians) can't know exactly what value the person will put on their own life, or what particular aspects of their life will be valuable to them; but that doesn't mean they can't do the best they can to put that developing person in the best possible position to decide what they value about their life once they are mature enough to do so.

In other words, bringing a newborn infant into being creates a moral obligation of guardianship. I think this is consistent with our moral intuitions. But it implies that it is not just an arbitrary choice on the part of adult humans to value the lives of newborn infants; it is a fact about them, the fact that they will become adult humans in the normal course of development. And if that is true for a newborn infant, it would also be true of a developing fetus in the womb (natural or artificial), if the "bright line at implantation" rule is correct.

Peter Donis said...

@Ron:
> In the case of ethics and morality we're asking: can we explain why people have the moral intuitions that they have.

But we already know the answer to that question, at least as a general statement: our moral intuitions are the result of evolution. Sure, we can still work out details, like what specific selection pressures produced particular moral intuitions, or what aspects of game theory explain why those intuitions would have evolved under those selection pressures. But that's just working out the detailed consequences of the general statement.

However, when you proposed idea-ism as a moral principle, I didn't understand you to be claiming that idea-ism explains why we have the moral intuitions we have. (If that was in fact your claim, I think it's obviously false.) I understood you to be claiming that idea-ism is the only single moral principle that captures our moral intuitions well enough to serve as a basis for a consistent morality, while still doing whatever justice to those intuitions that they deserve.

Ron said...

@Peter:

> I don't think it follows that the reason why a newborn infant has crossed the threshold has to be because we adult humans say its life has value.

Of course it doesn't. That's why I was deliberately non-committal about the reason a baby has value. Everyone agrees that it does. And everyone agrees that an uncombined sperm-and-egg doesn't have that same value.

The problem is for people who want to argue that, because a baby has value that sperm+egg does not, there must therefore be a bright line somewhere between A (sperm+egg) and B (baby). If you are a bright-liner, and you want to convince others to act on that basis, then it is incumbent on you to say where you think that line is and why.

But I am not a bright-liner so I don't have to say where the line is. (It is incumbent on me to explain how you get from A to B without a bright line, and I believe I have done so.)

Ron said...

@Peter:

> > can we explain why people have the moral intuitions that they have?

> But we already know the answer to that question

Sorry, I didn't phrase that well at all. I didn't mean "why" in the sense of "can we explain it in terms of physics." I meant can we reliably describe/predict (within reasonable error bounds) a person's behavior as a compact set of rules, or do we have to throw up our hands and say, "This person just does whatever the fuck they feel like whenever they feel like it, and there's just no way of knowing what they are going to do in any particular situation."

Peter Donis said...

@Ron:
> I meant can we reliably describe/predict (within reasonable error bounds) a person's behavior as a compact set of rules

That makes clear what you meant, but then I question whether what you meant falls under the heading of "morality" at all. Morality is supposed to be prescriptive, not descriptive.

Or to put it another way: if I try to derive a prescriptive rule from what you're saying, I come up with: act in accordance with moral rules R (leaving unspecified for the moment exactly which rules those are, since that's an empirical question, not a moral question) because everyone else does, and you want your actions to be predictable by them, and theirs to be predictable by you, so you can cooperate more effectively and thereby survive. (You, as an idea-ist, would probably say "and thereby meme biodiversity can increase".)

But such a prescription has a serious flaw: it leaves us stuck with our current moral intuitions as the best we can do. But our current moral intuitions might not be optimal. They didn't evolve to maximize cooperation. (They certainly didn't evolve to maximize meme biodiversity.) They evolved to maximize our inclusive genetic fitness. So by the above prescription, we are stuck with a set of moral intuitions that evolved to maximize the wrong quality metric. (Even if I am correct that there is no single "right" quality metric, that still doesn't mean maximizing inclusive genetic fitness is optimal.)

Our usual moral intuitions about morality itself don't look like this at all. Our moral intuitions about morality are that we can improve, and that we ought to improve. (Different people have different definitions of "improve", but I don't know of any moral tradition that says humans and human society are just right the way they are.) So even on the view that we ought to respect our moral intuitions, the above prescription doesn't do that.

Peter Donis said...

@Ron:
> (It is incumbent on me to explain how you get from A to B without a bright line, and I believe I have done so.)

Are you referring to idea-ism here? If so, I don't think it does the job, because it requires you to know how meme biodiversity is affected by various actions, and I don't see how we would know that. For example, how do we determine the impact on meme biodiversity if a pregnancy is aborted vs. being carried to term? Is it a function of how far along the pregnancy is? How?

Peter Donis said...

@Ron:
> The problem is for people who want to argue that, because a baby has value that sperm+egg does not, that there must therefore be a bright line somewhere between A and B.

One can adopt bright lines as a practical matter even if one doesn't believe they "really exist". We do that with plenty of things: driver's licenses, the right to vote, age of consent for marriage or sexual activity. We know these bright lines aren't "real", in the sense that being able to responsibly conduct the activity associated with them is not a binary property, it's a continuous process of development, and different people undergo that process at different rates. But we have to make a binary distinction, so we do the best we can to set a reasonable bright line.

Adopting a bright line for "having value" is the same kind of thing. Yes, there is no sharp transition between "not having value" and "having value"; but we have no way of knowing how to determine "how much value" a given embryo has in an individual case. We don't even have the option of testing for it, the way we make people take tests before getting a driver's license, for example. So if we have to make a binary distinction, we have no option but to draw a bright line somehow.

There is also an important distinction here between a "bright line" adopted by an individual person for making their individual choices, and a "bright line" adopted by a whole society for the purpose of regulating everybody's choices. As I said in a comment on the previous post in this series, even if I decided that the "bright line at implantation" rule was the right one for me personally, to guide my own decisions (or to guide what arguments I would make to a significant other or close friend), I might not be confident enough in it to want it to become the law of the land--I might think that allowing individual choice was the best (or least bad) option for society as a whole.

Don Geddis said...

@Peter: "One can adopt bright lines as a practical matter ... driver's licenses"

I'm not quite sure that your example works. In the case of abortion, you seem to be using "bright line" as a legal cliff: the entity has 0% rights prior to a specific event, and 100% rights afterwards.

But that's not even how driver's licenses work. In California, for example, you can first get a "learner's permit" -- after a certain age (15 1/2) -- which allows you to drive only while another licensed driver is a passenger in the car. Then later you are able to drive only yourself (and family members) -- but not non-family members. And then later you're a "fully licensed" driver. But yet even then, while not necessarily a matter of law, you'll find that you are unable to rent a car until after the age of 25. The "right to drive" is a series of steps, not a cliff.

Similarly, current abortion laws offer a series of rights, depending on the level of fetus development. In the early stages, the woman's choice is paramount and the fetus has few rights. By the end, it requires special extreme medical circumstances for the mother's wishes to override the fetus's right to life.

What is wrong with the current (graduated) approach? Why are you looking for a 0%/100% singular event?

Don Geddis said...

@Ron: Thanks for the clarification about moral reasoning. However, it seems to me that you're confusing some different things. Sometimes you talk about attempting to find a "concise" rule. Sometimes you talk about wanting to "reliably describe/predict" behavior. And in any case you seem to think that the alternative is "random or capricious" behavior.

But it seems to me that a giant lookup table of special cases is perfectly predictable. Behavior wouldn't at all be random, and would be easily predictable. But it wouldn't, of course, be concise.

Here is where I think we disagree. I think you're searching for a concise rule, exactly because your description of "how it works at all" is not correct. The actual reason you look for a concise rule (in morality, or in science) is because you start with a smaller "training set" (say, 50 moral scenarios), and you label them with the "right" outcome (using moral intuition), and then you search for a concise rule to cover those cases, yes. But then you use that rule to infer conclusions for a much larger set of scenarios than you have yet considered explicitly.

That's why you look for a "concise" rule, and why you worry about "overfitting", etc. If it was just a matter of lawful and predictable, then you could do just fine with simply empirically gathering all the intuitive moral examples you can find, and listing them in a huge dictionary of some kind. But I think you actually want to find a concise rule, because you want to compel certain moral choices (e.g. consistent ones) in cases where, for example, intuition breaks down and we aren't sure what the results "should" be. (Or maybe where we haven't yet carefully examined the scenario.)

Peter Donis said...

@Don:
> What is wrong with the current (graduated) approach?

You're right that in some cases we can have multiple lines instead of a single one. But each line is still a bright line. There's no state that lets you gradually gain driving privileges as a continuous process. You just have multiple discrete categories (learner's permit, driver's license), each of which has bright line boundaries for legal purposes (even if we know those boundaries are arbitrary choices).

> even then, while not necessarily a matter of law, you'll find that you are unable to rent a car until after the age of 25.

Yes, but that's a private choice by the car rental company, which is not for the purpose of deciding whether it's legal for a person to drive on public roads, but for the purpose of deciding whether the company wants to take the risk of renting its vehicle to that person. That has nothing to do with rights. The right to drive does not mean the right to drive someone else's car without their consent, and any car owner, or at least any prudent one, is going to give or withhold their consent based on factors that are important to them.

> current abortion laws offer a series of rights, depending on the level of fetus development

No, current abortion "law" (I put the scare quotes around it because it's governed by Supreme Court opinions which IMO have interpreted the Constitution out of all recognition, but that's a whole other discussion) offers a series of arbitrary rules to tell States how they can regulate abortion--meaning, how they can draw the bright line that determines when abortion is legal and when it isn't. But there are still only those two categories. Given the nature of this particular thing, I'm not sure how you could have more than those two categories; what would be the equivalent of a "learner's permit" for abortion?

It's true that the various bright lines that are drawn by the various States, as a matter of public policy, are much more complicated and gerrymandered than a simple bright line at conception or implantation or birth. That doesn't make them not bright lines; it just makes it harder to judge on which side of the line particular cases fall.

Ron said...

@Peter:

> Morality is supposed to be prescriptive, not descriptive.

Why can't it be both? Science is both. It describes the world, and also prescribes what we should do in order to achieve certain goals (except that then we call it engineering rather than science).

What science doesn't do is tell us what goals we should pursue, and how to make the trade-offs between conflicting goals. But there's no reason we can't apply the scientific method to that problem, with our moral intuitions about particular situations standing in for data.

> Our moral intuitions about morality are that we can improve, and that we ought to improve.

Sure, but that just raises the question: improve relative to what quality metric? As you yourself pointed out, there is no objectively correct answer to that question. We have to *choose*.

Just because we choose a quality metric that is fixed for all time (e.g. "Get closer to God", "Maximize the diversity of memes") does not mean that we cannot continually improve how well we perform relative to that quality metric.

> One can adopt bright lines as a practical matter even if one doesn't believe they "really exist".

Sure. And that is actually what we've done in our current legal regime with the trimester system. If you recall, this is actually what started this whole discussion: George Will wrote an essay criticizing the trimester system for being arbitrary, and I wrote a rebuttal saying essentially that this was not a valid criticism because there are no actual bright lines, and so *any* bright line you draw will be arbitrary.

Ron said...

@Don:

> But it seems to me that a giant lookup table of special cases is perfectly predictable.

Well, yeah, it would be, but I guess I still haven't made myself clear with regard to my motives. The value is not in your behavior being predictable in principle, or predictable to God. The value is in your behavior being predictable to other humans with whom you interact. They have to make assessments about your likely behavior in order to inform their decisions. The only way they can do that is on the basis of whatever information they can glean about you between when they first encounter you and when they have to decide. There may be a giant lookup table in your brain, but that doesn't help unless the person you're interacting with can access and process that information somehow, and I don't see how that can happen. The reason lookup-table-driven behavior seems arbitrary and capricious is not that it is not deterministic, but that there's no practical way for anyone you interact with to build a reliable predictive model of it.

> I think you actually want to find a concise rule, because you want to compel certain moral choices

Yes, all else being equal I would like to persuade everyone to adopt my quality metric, but then again, so would everyone else, so I recognize that I am unlikely to sway everyone to my side. But I still think there's substantial value in this exercise, because the way the process of assessing whether or not someone is trustworthy proceeds for a lot of people today, at least in the U.S., is that people ask: are you a Christian? And if the answer is yes then you are deemed trustworthy, and if the answer is no, then you are not (cf. Roy Moore). There's a trustworthiness pecking order: Protestants are at the top, Catholics are nowadays very close to the top (though it was not that long ago that they were much lower on the ladder), Jews are kind of in the middle, Muslims bring up the rear, and atheists are dead last because they are widely believed to have no moral compass at all (notwithstanding all the evidence to the contrary).

Completely independent of actually wanting to bend moral behavior in constructive ways, I think it would be useful for non-religious people to be able to refute the charge of not having a moral compass by actually being able to succinctly describe what their moral system is. I think that's true even if the succinct description is not 100% accurate.

Don Geddis said...

"George Will wrote an essay criticizing the trimester system for being arbitrary, and I wrote a rebuttal saying essentially that this was not a valid criticism because there are no actual bright lines, and so *any* bright line you draw will be arbitrary."

I agree with this 100%. This is my view too.

"can access and process that information somehow, and I don't see how that can happen"

Ah. I guess the scenario I was imagining was that we might all have common moral intuitions. It would be impractical if we each had a different lookup table, because then determinism wouldn't be enough to give you practical predictability.

But brains are way too complicated for most people to do any first-principle predictions anyway. The vast majority of "other mind" reasoning essentially relies on the fact that you have introspective access to your own brain. So you generally think to yourself, "if I were that person, in that circumstance, what choice would I make?" And then you run your own internal imagination, and your brain tells you what you would do, and then you copy that prediction and assign it to the person in front of you.

Many people start to do very, very poorly with "other mind" prediction, the more different the other people get from their own brain. Forget about the hard case with moral intuition. Take a less emotional one: many extroverts remain blind and confused their whole lives, to the actions and goals of introverts. Or men vs. women. Or straights vs. gays. It takes extreme effort to build a reliable model of another mind that significantly differs from your own, and most people never do that.

So I was imagining a deterministic, predictable moral intuition framework, which was shared among most people, but not necessarily consistent, or easy to summarize in a concise rule. In that way, the predictability comes from the imagination exercise of "what does my own moral intuition say in this case", and then applying it to the other person.

And the match doesn't have to be perfect. You still get practical predictability if the model of the other person is: "same as my moral intuition on thousands of cases, except for a different outcome in the specific cases of abortion, gay rights, etc." You can have a small, concise list of exceptions, even if the thousands of other cases are essentially arbitrary (but shared).

P.S. Yes, I totally get that atheists are dead last on the political ladder of trustworthiness. I'm actually an elected official in the State of California (school board). I've run a campaign, and been elected, on an actual political ballot. I know a reasonable amount about voting behavior, including the minefield of religion.

Ron said...

> I guess the scenario I was imagining, was that we might all have common moral intuitions.

Wouldn't that be nice! But even setting aside the overwhelming difficulties of achieving this noble goal, if it's just a giant lookup table, how would you know if you'd succeeded?

> I'm actually an elected official in the State of California (school board).

Good for you! Now try that in Mississippi.

Don Geddis said...

"how would you know if you'd succeeded?"

Certainly a fair point.

"Now try that in Mississippi."

Even in California, successful politics includes not surfacing controversial issues that aren't immediately relevant to the main topic under discussion. There is no benefit to forcing voters to confront challenging differences, if the goal is actually sideways from that. The trick to successful politics is to emphasize the areas of common agreement, and proceed on those. "Tilting at windmills" is a different objective than actually trying to get stuff done. (And so: religion was not a topic of discussion in my election. But racism was!)

As for Mississippi: voting behavior is extremely tribal. Most registered Democrats would never vote for any Republican, and vice versa. (Look at Roy Moore's continuing support!) How easy is it to get elected in Mississippi if you are openly black? Gay? Female? Conscientious objector?

The idea that voters dispassionately choose the candidate who intellectually offers the best ideas, is already a fantasy. In that context, it's kind of hard for me to get too worked up about the bias against atheists. Nor am I optimistic that establishing a concise and predictable non-religious moral code would do much to move the needle in voting outcomes.

Peter Donis said...

@Ron:
> Why can't it be both? Science is both.

We're using "descriptive" and "prescriptive" in different ways. As I'm using the terms (I'm borrowing from Feynman here, one of his popular articles--I think it was the one about the value of science), "descriptive" means statements of the form "If you do X, Y will happen"; "prescriptive" means statements of the form "Y should happen" or "Z should not happen". Science is only descriptive with this usage; you agree, since you say science can't tell us what goals we should choose. But neither does "morality", on your view, since on your view, "morality" is just statements of the form "If you do X, Y will happen" applied to human actions. It doesn't tell you what should or should not happen.

> improve relative to what quality metric?

There is no single answer to this question. Different people, and different communities, have different quality metrics. Not all of them are compatible; not all of them are even comparable. That's why I said different people have different definitions of "improve".

> We have to *choose*.

But there is no single "we"; there is no single choice that everyone will agree to, or even agree to accept with misgivings. And I don't think there should be. To borrow from Feynman again, to have everybody using the same single quality metric would be to doom future generations to the chains of our present imagination. I think that's a bad idea.

> If you recall, this is actually what started this whole discussion: George Will wrote an essay criticizing the trimester system for being arbitrary, and I wrote a rebuttal saying essentially that this was not a valid criticism because there are no actual bright lines, and so *any* bright line you draw will be arbitrary.

I agree that just saying a certain system is arbitrary is not a valid criticism, since that will be true of any system. And as a matter of public policy, we can't always just let everyone choose their own quality metric. Even in the case of abortion post Roe v. Wade, there is still a legal bright line at birth, so people who believe that a baby isn't a full-fledged person entitled to a person's rights until some time after birth still don't have legal sanction for their belief. (AFAIK such people are very rare today, but that hasn't always been the case historically.) But I do think that enforcing a single bright line on everybody ought to be a last resort, if we absolutely have to in order to have a reasonable society at all. (For example, I would not advocate that we should let killing adult humans be legal because a few people happen to think it's ok.)

Peter Donis said...

> I do think that enforcing a single bright line on everybody ought to be a last resort, if we absolutely have to in order to have a reasonable society at all.

Just to expand on this a little more, this is why I said earlier that, even if I believed a bright line at implantation was the right choice personally, I wouldn't necessarily want to make it the law of the land and thereby force the same choice on everybody. That would require a much higher level of confidence that that bright line was the right one (and I don't think I have that high a level of confidence in it).

Ron said...

@Peter:

> on your view, "morality" is just statements of the form "If you do X, Y will happen"

Huh? What did I say to give you that impression? I absolutely don't believe that.

> > > We have to *choose*.

> But there is no single "we";

Did I say anything to lead you to believe that I thought otherwise?

> there is no single choice that everyone will agree to, or even agree to accept with misgivings.

And yet we need to find a way to get along, even if that way is to start killing each other until the ones who are left manage to get along without killing each other. One of the things I personally would like is to minimize the carnage along the way.

The goal is not to have a *single* quality metric. The goal (for me) is to have the set of quality metrics in active use be mutually compatible with each other.

> I would not advocate that we should let killing adult humans be legal because a few people happen to think it's ok.

I agree. But what do we do about the people who disagree? We can't just kill them without being hypocrites. I suppose we could wait for them to kill us, but I don't see that as a good solution either. Maybe we can wait for them to kill us and then hope we can kill them in self-defense first?

Or maybe we could somehow persuade them to adopt a different quality metric.

Peter Donis said...

@Ron:
> What did I say to give you that impression?

This (your first comment in this thread, in the part responding to Don):

"In the case of ethics and morality we're asking: can we explain why people have the moral intuitions that they have."

And then following up with being able to predict other people's behavior so we can interact with them. All such statements are of the form "if you do X, Y will happen", applied to human actions, as I said. None of them tell you what you should or should not want to happen; that has to be put in by hand, so to speak, in order to apply all this knowledge about human moral intuitions and modeling behavior. None of that knowledge tells you what should or should not happen; it just tells you what will or will not happen if you interact with humans in particular ways or confront humans with particular moral dilemmas.

> Did I say anything to lead you to believe that I thought otherwise?

You advocate idea-ism as a single rule. That seems to indicate that you advocate for a single "we", where "we" all accept idea-ism.

Of course, as I just commented, you also say that moral rules, presumably including idea-ism, are just describing and explaining why humans have the moral intuitions they have, not saying what moral intuitions they *should* have. If that's all idea-ism is, then I agree it doesn't imply a single "we"--but it also doesn't imply anything prescriptive at all, as I said above.

You seem to want to have it both ways: you want to say that moral rules just describe the moral intuitions we do have, but you also want to say they're prescriptive and that there is a single quality metric--idea-ism--that we *should* adopt, whether or not people actually adopt it. So maybe I'm just confused about your actual position.

> The goal (for me) is to have the set of quality metric in active use be mutually compatible with each other.

It would be nice if that were possible without a lot of carnage, but human history does not make me optimistic.

> what do we do about the people who disagree?

If they want to disagree within our society, they are breaking our laws and will be treated accordingly. If they don't like that, they can go find some other society, or find a desert island. I don't think society has an obligation to support people who do not accept a basic rule that's required to have a civil society at all.

> We can't just kill them without being hypocrites.

I don't think it's hypocritical to execute someone that you know, to a moral certainty, is a murderer and is not going to accept that murder is wrong no matter what you do. The problem is that no human society has ever actually met that standard with regard to its treatment of people suspected of murder. With the humans we actually have, I agree that the death penalty won't work.

> Or maybe we could somehow persuade them to adopt a different quality metric.

Again, it would be nice if this were possible, but human history does not make me optimistic.

Ron said...

@Peter:

> maybe I'm just confused about your actual position

Yes, you are.

I advanced the utilitarian argument for moral rules (they are useful for predicting other people's behavior) as a response to Don who said that discussing morality is pointless because it won't converge. I was suggesting a way that the discussion could be useful even if it doesn't converge. That doesn't mean I think this is the only possible benefit of discussing morality.

And yes, it's true that I have advanced idea-ism as my own personal moral compass, and I do believe that the world would be a better place if everyone adopted it. But I also recognize that this is very unlikely to happen any time soon. Despite this, I think it's useful for me to be able to point to a succinct description of my own moral compass to refute those who believe that I have no moral compass at all (and should therefore be considered a second-class citizen) because I'm an atheist.

So there are at least three purely utilitarian reasons why I think talking about this stuff is useful. I also believe that introspecting about what is important to you is useful in that it helps you to live a happier more fulfilled life. The hardest part of getting what you want is often just figuring out what it is you want.

> > what do we do about the people who disagree?

> If they want to disagree within our society...

No, I was posing the question globally. This planet is too small and technology is too far advanced for us to resolve fundamental differences by segregating ourselves into enclaves. It just won't work (cf. North Korea, ISIS, Al Qaeda).

What do we do about someone who really believes in their heart of hearts that they are doing God's work by nuking San Francisco?

@Luke:

> If a choice now will result in less meme-habitat in one year, is it ok because the damage (or lack of … maximization?) is not immediate?

All else being equal, no. But all else is never equal when it comes to abortion. Abortions are never undertaken casually. It is always the case that failing to abort results in some significant negative consequences for the mother, otherwise she wouldn't consider it. So it's *always* a tradeoff between the interests of an existing brain and the interests of a potential future brain. There's no possible algorithm for making that tradeoff, which is why I think the right answer is to defer to the judgement of the already existing brain who is the biggest stakeholder.

> What does idea-ism look like with an obligation to maximize?

Idea-ism already has an obligation to maximize, but there is a lot more to maximizing the biodiversity of memes than simply creating the maximum number of human brains. Human brains are a necessary but not sufficient ingredient for creating memes. North Korean prison camps, for example, are chock-full of human brains, but you're not going to see a lot of art or creative writing or scientific breakthroughs coming out of them.

Peter Donis said...

@Ron:
> I think it's useful for me to be able to point to a succinct description of my own moral compass to refute those who believe that I have no moral compass at all (and should therefore be considered a second-class citizen) because I'm an atheist.

This I agree with wholeheartedly. I might not give quite the same description of my own moral compass that you do of yours, but I certainly would want to refute people who think I don't have one at all because I'm an agnostic. (Which functionally is the same as being an atheist, but I prefer the term "agnostic" for a variety of reasons that are too long to fit into the margin of this post.)

> there are at least three purely utilitarian reasons why I think talking about this stuff is useful

The word "utilitarian" is probably unfortunate here, since in the context of morality it names a particular moral viewpoint and even a particular quality metric within that viewpoint (which is not the same one you advocate), whereas you seem to be using it to refer to reasons you have that are purely pragmatic, not moral.

> I also believe that introspecting about what is important to you is useful in that it helps you to live a happier more fulfilled life.

I think this is also true of me, but I'm not sure it's true of everyone.

> What do we do about someone who really believes in their heart of hearts that they are doing God's work by nuking San Francisco?

Do you mean in a sane civilized world, or the one we actually have?

In a sane civilized world, at our state of technology and interconnectedness, "society" would mean the entire civilized world. Not that it all would be under one government or one set of rules for everything (I think I've already said that I believe that would be a bad idea), but there would be general agreement on some basic principles that were necessary to have a sane civilized world at all. Anyone who broke those rules would basically be exiled from the sane civilized world, and their ability to do harm would be removed. Maybe we find a desert island for them, as I said.

In the world we actually have, there is not general agreement on the basic principles that are necessary to have a sane civilized world at all. That's why we can have China sponsoring North Korea behind the scenes while publicly deploring the unfortunate situation there, and Russia supporting Iran or Syria behind the scenes while publicly deploring the unfortunate situation there. And why we can have the US meddling in all sorts of other countries while complaining when China or Russia do it.

Publius said...

We Appear To Have A Short-Term Memory Issue Here

@Ron
[quote from paragraph 29: https://goo.gl/cc93k]
>Note the highlighted words. These are where he actually moves the goal posts. It's a subtle but crucial shift, and I think that may be why I missed it the first time around: "WE presume that THEY have futures of value." Indeed fetuses do have futures of value relative to other people's quality metrics. But Marquis has explicitly disclaimed this mode of reasoning! It is not the effect of killing on friends, family, or concerned bystanders that makes killing wrong; it's the negative impact on the victim as assessed by the victim. This is not an accident; it's the only way to save the hermits. It's also the only way to avoid the conclusion that euthanasia is wrong.

The problem for Marquis is that fetuses do not and cannot possibly value their own lives. To value anything you have to have a brain, and fetuses don't. And it's even worse than that: the essential ingredient for valuing things is not a brain but a mind. (This is why it's generally considered OK to kill brain-dead people and harvest their organs: they have brains, but not minds.) Newborn babies have brains, but whether or not they have minds is debatable. In particular, it's debatable whether a newborn human has more of a mind than, say, an adult chicken. In fact, if you present the question to a chicken in a form that it can understand (e.g. standing over it with a butcher's knife in your hand) I'll wager it will give you some pretty definitive indications that it does indeed value its own future.

So not only does FOVC fail to save fetuses, it even fails to save newborns, at least as long as we find it acceptable to kill chickens for food. Oh well, at least the hermits can breathe a sigh of relief. (And maybe the chickens if people really start to take Marquis seriously.)


Marquis addresses your objections directly in paragraphs 47 through 61.

47 ".. . .More precisely, the strategy involves arguing that fetuses lack a property that is essential for the value-of-a-future argument (or for any anti-abortion argument) to apply to them."

48 "One move of this sort is based upon the claim that a
necessary condition of one’s future being valuable is that
one values it. Value implies a valuer. Given this one
might argue that, since fetuses cannot value their futures,
their futures are not valuable to them. Hence, it does not
seriously wrong them deliberately to end their lives. "

49 "This move fails, however, because . . .."

54 "Finally, Paul Bassen14 has argued that, even though the
prospects of an embryo might seem to be a basis for the
wrongness of abortion, an embryo cannot be a victim and
therefore cannot be wronged. An embryo cannot be a
victim, he says, because it lacks sentience. His central
argument for this seems to be that, even though plants and
the permanently unconscious are alive, they clearly
cannot be victims. What is the explanation of this? Bassen
claims that the explanation is that their lives consist of
mere metabolism and mere metabolism is not enough to
ground victimizability. Mentation is required. "

55 "The problem with this attempt to establish the absence of
victimizability is . . .."

Perhaps after re-reading paragraphs 47 through 61 you could respond to Marquis' refutation of your arguments.

Ron said...

@Publius:

> Perhaps after re-reading paragraphs 47 through 61 you could respond to Marquis' refutation of your arguments.

I believe I already have, but to recap:

Marquis: "This move fails, however, because of some ambiguities. Let us assume that something cannot be of value unless it is valued by someone. This does not entail that my life is of no value unless it is valued by me."

Well, yeah, I agree with that. Your life has value as long as your brain is habitat for memes, or your existence provides value to other people whose brains are habitat for memes. The problem is that *Marquis* doesn't agree with this. *He* is the one who insists, as I pointed out in the post, that it is *not* the value of your life to others that makes it wrong to kill you ("What primarily makes killing wrong is neither its effect on the murderer nor its effect on the victim’s friends and relatives..."), it is the value of your future life *to you*, which requires there to be a "you". To impute "youness" to a blastocyst requires one to assume what it is that Marquis wishes to prove. It's pure circular reasoning.

Marquis: "The problem with this attempt to establish the absence of victimizability is that both plants and the permanently unconscious clearly lack what Bassen calls “prospects” or what I have called “a future life like ours.”

This has nearly the same problem, as I have also discussed at length: it requires one to assume that the thing that will eventually be capable of valuing things is the same thing as the blastocyst, but not the same thing as sperm+egg.

There's also this interesting twist with plants: humans eat plants, so it is arguable that a plant eaten by a human and incorporated into their body has a future-like-ours.

Peter Donis said...

@Ron:
> it is the value of your future life *to you*, which requires there to be a "you".

Marquis addresses this too, in the latter part of 49:

"Furthermore, my future can be valuable to me even if I do not value it. This is the case when a young person attempts suicide, but is rescued and goes on to significant human achievements. Such young people’s futures are ultimately valuable to them, even though such futures do not seem to be valuable to them at the moment of attempted suicide. A fetus’s future can be valuable to it in the same way."

It is true that Marquis argues earlier, in paragraph 23, that killing is wrong because of its effect on the victim; but his discussion there, and later on when he discusses cases like people who are unconscious or in a coma, makes it clear that the effect on the victim--the loss to the victim--is not a matter of what the victim thinks the loss is at the time it happens. It's the loss of the victim's future, independently of what the victim thinks about that loss at the time, or even whether the victim is capable of thinking at all.

So I don't agree that Marquis's position is that it is the value of your future life *to you* that counts, unless "to you" is interpreted very broadly. Requiring there to be a "you" at the time of the loss makes the interpretation of "to you" too narrow, at least as Marquis is expressing his position.

Ron said...

@Peter:

Yes, I get all that. The problem for Marquis is not that a blastocyst doesn't have a brain. Well, that's a problem too, but it's not his main problem. His main problem is that in order to have the blastocyst have a future-of-value but sperm+egg not, Marquis has to assume the thing which he wishes to prove, namely, that conception is the event which produces the thing which has a future-of-value. That makes his reasoning circular.

Peter Donis said...

@Ron:
> His main problem is that in order to have the blastocyst have a future-of-value but sperm+egg not, Marquis has to assume the thing which he wishes to prove, namely, that conception is the event which produces the thing which has a future-of-value. That makes his reasoning circular.

I don't think your objection is specifically that he assumes the thing which has a future-of-value is produced at conception. I think your objection is that his viewpoint requires that there is a sharp transition somewhere, and that somewhere has to be early enough to make practically any abortion wrong. Implantation would serve that purpose just as well as conception; but I think you would say that a sharp transition at implantation is open to the same objection that you make here about a sharp transition at conception.

Peter Donis said...

> I think you would say that a sharp transition at implantation is open to the same objection that you make here about a sharp transition at conception.

Perhaps a better way to put this is: Marquis's argument requires there to be a sharp transition, but he gives no reasons why conception is that transition. When I proposed that implantation be treated as the event where the thing with a future of value is produced, I gave reasons for that; it wasn't just an arbitrary choice. But as far as I can see, Marquis does not do anything like that in his paper.

Ron said...

@Peter:

> I think your objection is that his viewpoint requires that there is a sharp transition somewhere

No, my objection is that he *asserts* that there is a sharp transition with no justification.

I don't know whether his position requires a bright line or not. That's not for me to say.

Luke said...

@Ron:

I missed this because you put your response in a different blog post than my question.

> So it's *always* a tradeoff between the interests of an existing brain and the interests of a potential future brain. There's no possible algorithm for making that tradeoff, which is why I think the right answer is to defer to the judgement of the already existing brain who is the biggest stakeholder.

How often is there an actual algorithm? That seems like a rather high standard. If one lowers it to what other moral systems actually deliver, I wonder if idea-ism would then have an answer. If not, one might ask in how many other places idea-ism cannot actually offer much guidance. Because life is chock-full of tradeoffs. (I suspect too many moral systems don't really acknowledge this in a way that provides anything approaching comprehensive guidance.)

> Idea-ism already has an obligation to maximize, but there is a lot more to maximizing the bio-diversity of memes than simply creating the maximum number of human brains. Human brains are a necessary but not sufficient ingredient for creating memes. North Korean prison camps, for example, are chock-full of human brains but you're not going to see a lot of art or creative writing or scientific breakthroughs coming out of them.

That's fine; I don't need an algorithm. But surely you can say something more than the above, when it comes to idea-ism's obligation to maximize and how that interfaces with abortion? If the obligation to maximize has no interesting bite (because, say, you're relying on people's current judgment all over the place), then one can rightly question whether it truly exists.


I'm just trying to get you to flesh out idea-ism more, here. The devil is always in the details.

Publius said...

Natural Property Future Of Value

@Ron
>The form of the argument is, "what makes the killing of adult human beings wrong is FOVC, therefore FOVC is a valid criterion by which to judge the wrongness of killing, and hence it is wrong to kill anything that values its own future."

>It's a subtle but crucial shift, and I think that may be why I missed it the first time around: "WE presume that THEY have futures of value." Indeed fetuses do have futures of value relative to other people's quality metrics. But Marquis has explicitly disclaimed this mode of reasoning! It is not the effect of killing on friends, family, or concerned bystanders that makes killing wrong, it's the negative impact on the victim as assessed by the victim. . . .

You need to read it more carefully.

The form of the argument is (sketched formally just below):
1) The natural property that explains the wrongness of killing is the loss to the victim of the value of his future.
2) If a being is in the category "having a valuable future like ours," then it is wrong to kill it. (see paragraph 31).
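
A minimal formal sketch of this two-step form (the predicate and constant names are just labels of mine, not Marquis'):

```lean
-- Hypothetical vocabulary for the sketch; none of these names come from the paper.
axiom Being : Type
axiom HasValuableFutureLikeOurs : Being → Prop
axiom WrongToKill : Being → Prop

-- (1) The FOVC premise: anything in the category is wrong to kill.
axiom fovc : ∀ b : Being, HasValuableFutureLikeOurs b → WrongToKill b

-- (2) The premise that the fetus is in the category.
axiom fetus : Being
axiom fetus_in_category : HasValuableFutureLikeOurs fetus

-- The conclusion follows by ordinary modus ponens.
example : WrongToKill fetus := fovc fetus fetus_in_category
```

The inference itself is just modus ponens; the disagreement in this thread is entirely over premise (2).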

>The problem for Marquis is that fetuses do not and cannot possibly value their own lives. To value anything you have to have a brain, and fetuses don't. And it's even worse than that: the essential ingredient for valuing things is not a brain but a mind. (This is why it's generally considered OK to kill brain-dead people and harvest their organs: they have brains, but not minds.)

Marquis' argument doesn't depend on the fetus valuing its own life. He brings up your argument in paragraph 48:

One move of this sort is based on the claim that a necessary condition of one's future being valuable is that one values it. Value implies a valuer. Given this one might argue that, since fetuses cannot value their futures, their futures are not valuable to them. Hence it does not seriously wrong them to deliberately end their lives. [paragraph 48]

Marquis then disproves your argument in paragraph 49:

This move fails, however, because of some ambiguities. Let us assume that something cannot be of value unless it is valued by someone. This does not entail that my life is of no value unless it is valued by me. I may think, in a period of despair, that my future is of no worth whatsoever, but I may be wrong because others rightly see value—even great value—in it. Furthermore, my future can be valuable to me even if I do not value it. This is the case when a young person attempts suicide, but is rescued and goes on to significant human achievements. Such young people’s futures are ultimately valuable to them, even though such futures do not seem to be valuable to them at the moment of attempted suicide. A fetus’s future can be valuable to it in the same way. Accordingly, this attempt to limit the anti-abortion argument fails.


Marquis' argument does not depend on the fetus being able to value its own life.

Publius said...

Category Error

@Ron:
>Your life has value as long as your brain is habitat for memes, or your existence provides value to other people whose brains are habitat for memes. The problem is that *Marquis* doesn't agree with this. *He* is the one who insists, as I pointed out in the post, that it is *not* the value of your life to others that makes it wrong to kill you ("What primarily makes killing wrong is neither its effect on the murderer nor its effect on the victim’s friends and relatives..."), it is the value of your future life *to you*, which requires there to be a "you". To impute "youness" to a blastocyst requires one to assume what it is that Marquis wishes to prove. It's pure circular reasoning.

This is not from your main post, but from comments. It simply restates your error.

Marquis' argument is that a fetus is in the category "having a valuable future like ours". Once a being is in that category, it is wrong to kill it.

>The problem for Marquis is not that a blastocyst doesn't have a brain. Well, that's a problem too, but it's not his main problem. His main problem is that in order to have the blastocyst have a future-of-value but sperm+egg not, Marquis has to assume the thing which he wishes to prove, namely, that conception is the event which produces the thing which has a future-of-value. That makes his reasoning circular.

A sperm fertilizing an egg is the first developmental step of an identifiable human being. Once you can identify a subject who can suffer the loss of a future of value like ours, that subject is now in the category of "having a valuable future like ours." Hence it is wrong to kill it. See paragraph 64.

Marquis anticipated and disproved your arguments 28 years before you made them.

Publius said...

Meme Phantasmagoria

@Ron:
>Idea-ism already has an obligation to maximize, but there is a lot more to maximizing the bio-diversity of memes than simply creating the maximum number of human brains. Human brains are a necessary but not sufficient ingredient for creating memes. North Korean prison camps, for example, are chock-full of human brains but you're not going to see a lot of art or creative writing or scientific breakthroughs coming out of them.

@Ron 2015:
>Idea-ism has a measure for the quality of a meme: any meme that advances the interests of memes (in general) is better than any meme that doesn't. So, for example, the "book" meme is better than the "war" meme. (Note that this criterion doesn't define a total order.)

How do "art or creative writing or scientific breakthroughs" advance the interest of memes?

@Luke:
>I'm just trying to get you to flesh out idea-ism more, here. The devil is always in the details.

Perhaps he should start with justifying that memes even exist, and don't belong in the ontological category of "myth," which is a subset of "fiction," which is a subset of "mental construct." See also Memes: Universal Acid or Better Mouse Trap?

Don Geddis said...

@Publius: "A sperm fertilizing and egg is the first developmental step of an identifiable human being." Hardly. Once you give me a separate haploid egg and haploid sperm cell, I can already "identify" the future human being, just as easily. Fertilization is of course an ordinary part of the whole developmental process (along with thousands of other future steps), but it isn't at all a necessary part of our knowledge of future identity.

"justifying that memes even exist" That's a foolish article. You ought to read that section of Dawkin's book first. His whole point is that evolution is a process that depends on certain features, which is potentially more generally than just biological DNA. Memes are (possibly) another example. Memes may or may not follow the rules of DNA evolution, that's true. But ideas obviously "exist", and can be transmitted between human via natural language communication, etc. All the silly talk about mirror neurons and looking for some physical correlate of an idea is completely irrelevant. Actually, his real mistake is this one: "it is just an analogy". That misses the whole point, which is that evolution describes a system evolving according to some specific rules. The sentence before is actually correct: "Memetics is just another way of saying that “culture evolves according to analogous laws to genetic natural selection.”" That's all that was ever claimed in the first place!

It's similar to starting with addition on natural numbers, then extending it to rationals, then reals, then complex numbers. It isn't a criticism to say that addition on complex numbers is "just an analogy" to addition on natural numbers. The point, instead, is the reverse: that "addition" is a more general concept than just the original natural number version. And evolution is more general than just biological DNA.
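
To make that concrete, here is a toy sketch (my own illustration, not anything from Dawkins or from Marquis's paper; the names evolve, vary, fitness, and TARGET are all hypothetical). The selection loop is written against an abstract "replicator" interface, and nothing in it mentions DNA, which is the sense in which "evolution" is more general than its biological instance, just as "addition" is more general than its natural-number instance.

```python
# Toy sketch of a variation-plus-selection loop over an abstract replicator type.
import random
from typing import Callable, List, TypeVar

R = TypeVar("R")  # any replicator: a genome, a slogan, a tune, ...

def evolve(population: List[R],
           vary: Callable[[R], R],
           fitness: Callable[[R], float],
           generations: int) -> List[R]:
    """Generic variation-plus-selection loop; nothing here mentions DNA."""
    for _ in range(generations):
        # each member produces two varied copies; the fittest copies survive
        offspring = [vary(parent) for parent in population for _ in range(2)]
        offspring.sort(key=fitness, reverse=True)
        population = offspring[: len(population)]
    return population

# Instantiate the same loop for a "meme-like" replicator: short strings copied
# with occasional typos, selected for resemblance to a target phrase.
TARGET = "ideas evolve"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def vary(s: str) -> str:
    i = random.randrange(len(s))
    return s[:i] + random.choice(ALPHABET) + s[i + 1:]

def fitness(s: str) -> float:
    return float(sum(a == b for a, b in zip(s, TARGET)))

seed = ["x" * len(TARGET)] * 20
print(evolve(seed, vary, fitness, generations=200)[0])
```

Run as-is this typically converges toward the target phrase; swapping in a different replicator type changes nothing about the loop itself, which is the only point the sketch is meant to make.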

Luke said...

If we're going to examine memes, I suggest revisiting:

> Luke: @Ron, are you aware of any active research being done on memes? There was a Journal of Memetics, but it ceased publishing in 2005. The publisher, Bruce Edmonds, wrote the following article (RonH notified me of this):
>
> The revealed poverty of the gene-meme analogy
> why memetics per se has failed to produce substantive results

> Ron: Nope. I am a lone voice in the wilderness at the moment. But I have faith ;-)
>
> In all seriousness I am only moderately and temporarily discouraged by the demise of the Journal of Memetics. The same thing happened to AI in the late 80's and look where we are now. Figuring out how brains work is really, really hard -- quite possibly the hardest problem in the universe -- and so it is not too surprising that we haven't made a lot of progress in a mere 40 years since Dawkins coined the term "meme".

So, memetics already has a major problem: it doesn't appear to be useful scientifically, especially in comparison to other ways of describing human thinking in society.


For those who would claim that memetics is analogous to evolution by natural selection, I would ask what role human agency has, if any, in that process. Some forms of social constructivism would obliterate human agency, and arguably naturalism does as well. I reference Bruce Waller's Against Moral Responsibility, where he argues that on naturalism (which he accepts), humans cannot deserve any special praise or blame. You can still have an "as-if" agency; we can explore the difference by considering a simple scenario I recently presented to Dr. Parsons over on SO:

>> Suppose that a research faculty member instructs one of her grad students to run some experiments. This grad student is of middling abilities and needs to be micromanaged. The grad student follows instructions to the letter and delivers the experiment results to his boss. She then makes a discovery which results in a Nobel Prize. Did she deserve most if not all of the credit, and her middling grad student little to none? If there is the kind of agency which TEPS would deny, yes. Otherwise, the faculty member initiated no more causal chains than her grad student and deserves no additional credit. (N.B. I doubt many grad students who contribute to Nobel Prize-winning research are well-described by this hypothetical.)

On naturalism, the professor doesn't "deserve" more credit in some inherent sense. Instead, society just chooses to apportion rewards and punishments in a way that advances its interests. The individual plays second fiddle in this scheme. Indeed, the individual isn't really more "active" than the selfish gene: both are 100% the products of their environments and 100% of their actions are the result not of themselves, but of what was done to them. Maybe this isn't a problem and maybe I've made a logic error, but I think the consequences of ideas ought to be explored, in case they match reality badly (thereby distorting it) when applied in domains where they don't belong.

Ron said...

@Publius:

> Marquis' argument does not depend on the fetus being able to value its own life.

Yes it does. It does not depend on the fetus *actually valuing* its own life, but it does depend on the fetus being *able* to value its own life.

The suicidally depressed person thinks their life has no value, but they are simply mistaken because they are suffering from depression. If they are given treatment they can be made to see that they are mistaken. So a depressed person is *able* to value their life (because they have a brain) even though they may temporarily not value it.

By way of contrast, a terminally ill person, or someone suffering from untreatable depression, may decide that their life has no value, or that the net future value of their life is less than the negative value of the pain they will have to endure in order to live it. In this case, they could very well not be mistaken, and hence euthanasia is moral (with proper safeguards to ensure that the person requesting it is not suffering from treatable depression).

But a fetus cannot be mistaken in its assessment of the value of its life. A fetus *has* no assessment of the future value of its life, and cannot have such an assessment because it has no brain.

If you want to argue differently, then you have to either 1) show how a blastocyst can value its future while a sperm+egg cannot, or 2) concede that contraception is immoral.

@Luke:

> For those who would claim that memetics is analogous to evolution by natural selection, I would ask what role human agency has, if any, in that process.

It has exactly the same role that it has in DNA-based evolution. Humans have been producing artificial life forms through non-natural selection for millennia. Chihuahuas, cauliflower and beef cattle would not exist but for our meddling.

Ron said...

@Luke:

> surely you can say something more than the above, when it comes to idea-ism's obligation to maximize and how that interfaces with abortion?

Indeed I can. I could probably write a whole book about it (and some day maybe I will).

But how idea-ism "interfaces" with abortion is at root very simple: someone has to assess whether the birth of another child will be a net win or loss to the memetic ecosystem. Advancing the interests of memes is not easy. It requires not just human brains (for now) but also the right environment for those brains to develop and thrive. If it is judged that the right environment cannot be provided, then aborting a fetus before it has a chance to grow a brain can be the right thing to do.

But even better would be to widely promote the use of birth control so that this very difficult decision never needs to be made by anyone. There should be condoms available for free and without stigma in every school.

Luke said...

@Ron:

> > For those who would claim that memetics is analogous to evolution by natural selection, I would ask what role human agency has, if any, in that process.

> It has exactly the same role that it has in DNA-based evolution. Humans have been producing artificial life forms through non-natural selection for millennia. Chihuahuas, cauliflower and beef cattle would not exist but for our meddling.

Natural selection is not guided by any purpose; artificial selection is. That is a giant difference.

> But how idea-ism "interfaces" with abortion is at root very simple: someone has to assess whether the birth of another child will be a net win or loss to the memetic ecosystem. Advancing the interests of memes is not easy. It requires not just human brains (for now) but also the right environment for those brains to develop and thrive. If it is judged that the right environment cannot be provided, then aborting a fetus before it has a chance to grow a brain can be the right thing to do.

That much is obvious, but without seeing how it would actually play out, it's still incredibly abstract.

> But even better would be to widely promote the use of birth control so that this very difficult decision never needs to be made by anyone. There should be condoms available for free and without stigma in every school.

I might be in favor of that if sociological/psychological research on what easy sex can do to people were also taught. I'm thinking of Chap Clark's Hurt: Inside the World of Today's Teenagers & Hurt 2.0, although since Clark is a Christian, and Christian-authored books on this subject carry a stigma, it'd be good to get a Certified Secular™ counterpart. You should want this too, if you're interested in memes. People with more psychological issues tend not to generate as many memes.