Why The Gods Are Trolling You

Date published: Wed, 13 Jun 2012 16:00:00 -0700.

(I actually intended to make this argument after I had presented a certain construction of (meta-)meta-ethics, and after I had more strongly motivated the locality axiom, but Things Changed[1]. You may have to fill in the gaps yourself for now.)

Let’s start with a simple definition - what’s trolling? Trolling, like crackpottery, is arguing for positions that are not merely motivated by truth-seeking[2]. The major difference, however, is that a crackpot actually believes what they are saying; they just use an interestingness prior to select their beliefs. A troll intentionally adjusts their beliefs for the specific argument, either in content (“lol bible says kill the gays”) or strength (“I feel very strongly about this definition!”).

Of course, all the good philosophers[3], mystics[4] and hackers[5] have, at some point at least, been engaged in trolling, but What Would Jesus Do isn’t enough of an argument for us.

So let’s do this from first principles.

The simplest axiom is this:

  • Moral action must always be possible.

There cannot be a situation in which every possible action an agent can take is wrong. The set of available moral actions is never empty.
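In symbols (a minimal sketch; the notation S, A(s), M(s) is mine, not the post’s): write S for the set of situations an agent can face, A(s) for the actions available in situation s, and M(s) ⊆ A(s) for the morally acceptable ones. The axiom is then

    \forall s \in S: \quad M(s) \neq \emptyset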

This should be self-evident, just like the axiom of identity. Similar requirements exist elsewhere. For example, we have this axiom when it comes to rational beliefs:

  • Rational beliefs must always make a difference in anticipation.

Beliefs are about predictions and anticipation, but if someone who holds an irrational belief makes exactly the same predictions in all circumstances as someone who holds the rational belief, then, well, you’re doing it wrong.

Either the irrationalist is actually right and just uses a different language, or the rationalist is wrong and likely arguing about a meaningless question.
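The same point as a sketch in that notation (again mine): beliefs B1 and B2 differ in anticipation only if some possible observation o gets different probabilities under them,

    \exists o: \quad P(o \mid B_1) \neq P(o \mid B_2)

and if no such o exists, the two “beliefs” are one belief wearing different words.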

Similarly, morality is about actions. In rationality, you are presented with a set of possible beliefs and choose certain ones. In morality, you choose actions[6] in the same way.

So just as rationality requires that there is always a difference in anticipation and that the set of anticipated events is never empty, so morality requires a difference in action and that the set of available moral actions is never empty.

This does not, of course, require that those actions be easy, pleasant, certain or otherwise nice. Sophie’s Choice is still allowed, but not Calvinism.

This is already enough for us, it turns out. One meta-level down, we can now formulate the locality axiom:

  • Only local information can be relevant for moral action.

If this were not the case, and moral decisions depended on global information, like say the entire history of the Andromeda Galaxy, then anyone who doesn’t have this information - especially any computationally or physically limited agent like us - could not, in principle, do the right thing.

Thus, morality is always local.[7]
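One way to make this precise (my notation again, a sketch rather than anything the post commits to): write local(s) for the information the agent in situation s can actually access and compute. Locality then demands that situations indistinguishable on local information license the same moral verdicts:

    \forall s, s' \in S: \quad \mathrm{local}(s) = \mathrm{local}(s') \;\Rightarrow\; M(s) = M(s')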

But what does this imply?

Remember the Cartesian Demon, a being which is vastly more powerful than you and misleads you about the content of reality. Figuring out whether such a being exists or not is not possible with your computational resources. Thus, its existence is global information, not local - it cannot be morally relevant!

Dealing with an angel or a demon can’t make a moral difference for you because you couldn’t (in the general case) tell them apart in the first place. Whether you are being trolled or not is therefore morally irrelevant.

So it’s clear that being trolled is morally neutral, but what about actively trolling someone?

Well, that depends on your intentions[8]!

For one thing, it is not possible for your actions to ever screw over another agent in the moral sense. (It might still suck to be them, though.) However, you also can’t be responsible for consequences you couldn’t locally have predicted, or else you might unknowingly bring damnation upon a Cartesian Stalker that chose to kill itself should you ever eat chocolate ice cream, a clear violation of locality.

So because others don’t have an obligation to hear any particular version of the truth, you can’t, in general, have an obligation to speak it either.

This doesn’t give you a get-out-of-jail-free card, but it does, quite explicitly, allow trolling for the good[9] of the one being trolled.

Which is why the gods are trolling you.

  1. Particularly, I saw Grognor’s tweet:

    Rationalize that trolling is morally neutral and can be done responsibly so you can keep doing it without feeling guilty #lifehacks #muflhax

    And I’m appalled by that suggestion! I’m not rationalizing! I have a complex meta-ethical set of axioms that has morally-neutral trolling as a derivable theorem!

    I didn’t start out with the conclusion here, I did proper meta-ethics and discovered it! I’m not that crazy.

  2. Or rather, “believing without preferences”, merely following the axioms of probability theory without any utility function. Not even Roombas do that.

  3. Schopenhauer even wrote a book about it.

  4. The Lord has said: “These people praise me with their words, but they never really think about me. They worship me by repeating rules made up by humans. So once again I will do things that shock and amaze them, and I will destroy the wisdom of those who claim to know and understand.” (Isaiah 29)

  5. I don’t think I need to cite examples.

  6. Note, of course, that deliberately believing something is an action. Beliefs are not exempt from optimization. Don’t be a rock.

  7. See Non-Local Metaethics for why this immediately rules out many meta-ethical theories like Average Utilitarianism.

  8. I’d like to point out that locality automatically introduces the Doctrine of Double Effect.

  9. Act only for the good. Leave the “greater” to God.

by Oligopsony on Thu, 14 Jun 2012 05:36:24 -0700

You seem to be appealing to intuition, or to axioms. But suppose we take some preference ordering over global outcomes for granted. Then don't these principles just naturally arise as part of intersubjective enforcement policies? That seems considerably more elegant. But you're a smart cookie and yet (as best I can tell) a non-naturalist deontologist anyway, so perhaps I'll have to wait for the Grand Ethical Theory.

by muflax on Thu, 14 Jun 2012 06:09:05 -0700

I disagree about the elegance - I find minimal sets of plausible axioms more elegant, but I'm not going to argue much either way.

I don't understand your point about the preference ordering. Is it important which specific one we pick or do you mean it would apply to any? And where does it come from?

Or do you mean that it would arise as a natural Schelling point, basically "be conservative in what you send, liberal in what you accept"? That seems to depend strongly on the agents you expect to communicate with, would not absolve you from harm if you are significantly more powerful / better informed than your partner and would not free you from moral luck. (So at least, it would be a much weaker conclusion.)

About deontology, see the second half (in parentheses) of my comment to wallowinmaya.

(And I honestly can't tell what people mean when they talk about naturalism vs. non-naturalism except for historical affiliations. I mean, an AI-dominated computationalist world looks pretty much exactly like polytheism, but one is natural and the other isn't? Is a Platonist a naturalist or not? Is Tegmark? I don't know.)

by Will Newsome on Thu, 14 Jun 2012 12:53:05 -0700

"And I honestly can't tell what people mean when they talk about naturalism vs. non-naturalism except for historical affiliations."

I haven't figured this out yet either.

Also, good post, though I'm probably saying that mostly because I think the gods actually do tend to troll quite a bit. (See Hansen's book on "The Trickster and the Paranormal", I guess.)

by Will Newsome on Thu, 14 Jun 2012 12:53:40 -0700

"He trolls us because He loves us."

by Mitchell Porter on Fri, 15 Jun 2012 17:02:27 -0700

Naturalism just means materialism.

by Oligopsony on Sat, 16 Jun 2012 13:53:42 -0700

Hmm. I had been taking too many cues from Will Newsome and skimped on explaining what I actually meant by things. (Not that Will is wrong to do so, necessarily. It's part of his charm.) 

Fewer axioms are more elegant (that's axiomatic). But the content of these axioms can be found as theorems. Suppose some preference ordering over outcomes, whose content isn't particularly important (for this question). Ceteris paribus you want other people to operate according to this preference order (so it can be realized), so you incentivize them to act on it. Doing so successfully implies striking out at them not when their options are absolutely bad, but when they choose a relatively worse option. Locality, bam.

(I'm not sure that, if we can realize that the gods are trolling us, the gods would actually be trolling us in the relevant sense. Which doesn't mean that there would be no point in coming to that sort of aufhebung or w/e.)

by Oligopsony on Sat, 16 Jun 2012 13:55:20 -0700

(And oops for mislabeling you.)

by muflax on Sat, 16 Jun 2012 14:02:20 -0700

Makes sense now. Neat. Still feels a bit different, but can't quite say why, have to think about it.

by Oligopsony on Sun, 17 Jun 2012 13:41:14 -0700

Scattered additional thoughts on the above:

1) This naturally produces the impermissible/permissible/supererogatory/obligatory distinction, although relativized to judging and judged agents, rather than inhering in actions. (This probably accords better with our object-level intuitions about who's obligated to do what in various situations than the traditional view, even if the mechanics don't.) This also solves demandingness if demandingness is in fact a problem to be solved. (If you respond well to guilt, well, demandingness is a feature not a bug.) 

2) Virtue-theoretic and intentionality-focused distinctions and so on are grounded in the fact that humans just aren't unitary agents. (On this blog I'm tempted to type out: DUH.) We're all constantly acting from bundles of motivations, some good to encourage, some good to discourage. Davos' fingers and all that.

3) Locality doesn't really get you to DDE in its strong form, which would justify the standard "inconsistent" answer in the Trolley Problem, because an anticipated effect is an anticipated effect is an anticipated effect. If you have differing attitudes towards the effects themselves, that can say evaluable things about you, but the distinction between (if what you really want is A) doing B to accomplish A and doing A in a way that produces B is just obvious Jesuitical bullshit (and I like Jesuits too!)

4) On an unrelated note, Hansen is absolutely fascinating.

by muflax on Sun, 17 Jun 2012 19:17:26 -0700

1) Even neater! (I don't like guilt-trips, but I like Lawful Neutral. Demandingness is definitely a feature, but "ends don't justify the means" is even more of a feature. Principles, bitches!)

I just noticed that I have been kinda using this as my sanity check for a while. "Would God have mercy on me if I acted like this?"

3) Locality alone is probably not sufficient for DDE, yeah. (Although I'm still confused about anticipations, especially in the multiverse.)

And agreed that DDE in its strictest form is hard to actually apply (if it's at all possible). It's more of a useful heuristic than a fundamental axiom, and the distinction between "sorry, didn't want to harm you, but couldn't avoid it either" and "die for my ship!" is clearly valuable as a psychological safety mechanism, but not necessarily a fundamental feature of reality.

(Did consider becoming a Jesuit once, actually. Couldn't get past them not using The One True Interpretation Which I Just Happen To Believe.)

4) Noted.

by Yvain on Mon, 18 Jun 2012 15:21:28 -0700

Wait a second. All this non-locality stuff seems to ignore the relatively common-sense possibility of judging morality by intentions rather than results. And by intentions, I mean "predictions of results according to your current beliefs."

For example, if you see a child drowning in a river, and you rescue him, and then the child grows up to be Hitler, your action is still moral (I guess now we're talking about locality in time as well as space). It had bad results, but if you accept that for the average kid them living is better than them dying, and you had no information that this kid would be especially evil at the time, then you were justified in predicting your action would increase utility, and so you acted morally in performing the action.

This distinction saves us from worrying about locality. If you are an average utilitarian and you believe that the average utility of the universe is 10, then creating a being of utility 11 is moral. Given your belief, you should expect it to also be good, but if you were mistaken and the average utility of the universe is 20 and your action was not good, it is still moral.
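(To make the arithmetic explicit, with n the number of existing beings: adding one being of utility 11 changes the average to (10n + 11)/(n + 1) = 10 + 1/(n + 1) > 10 if the old average really was 10, but to (20n + 11)/(n + 1) = 20 - 9/(n + 1) < 20 if it was actually 20.)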

by muflax on Tue, 19 Jun 2012 01:59:10 -0700

I think we use the term "moral" slightly differently, so let me make it explicit.

When it comes to beliefs and truth, you can distinguish between "the best you can believe, given the evidence" and "what is actually true". The first is called "rational", the second "true" belief. They can easily be different things.

Similarly, for morality, there is "the best you can do, given the evidence" and "what is actually good". We might also add "what is most advantageous to influence you to do".

I'd only call what is actually good "moral", and the last thing "praiseworthy", but my point is that the first *can't* exist as a separate category. The best I can do and what is actually good *must* be identical. So I'm fundamentally rejecting all forms of moral luck.

According to avg util, there can be moral luck. There could be a button that actually halves the utility in our universe, but which all evidence has led me to believe summons ponies and ice cream for everyone. Based on a perfectly rational belief, I press it and screw over everyone.

Even given the best data, and doing nothing wrong on my side, I did what was actually wrong, even though it may still have been praiseworthy to do it. Avg util clearly has moral luck because it is non-local.

And because what matters is not what, say, a society should encourage people to do, or what the best realistic outcome is, but what is *actually good*, moral luck sucks real bad. So much so that you might as well become a nihilist if your metaethics have it, because reality will demand you live up to standards it doesn't even reveal to you and which are literally impossible for you to reach.

(Also, you use intentions for what I'd call anticipations, and not for the obvious thing - the telos, the what-I-want-to-bring-about as opposed to the what-I-expect. (Of course, my use of the term is always the obvious one.))

by Yvain on Tue, 19 Jun 2012 04:23:06 -0700

I'm still not sure where you're coming from, so let's get into more detail.

What you're calling "morality" and "praiseworthiness" are what I would call "goodness" and "morality" respectively, so let's throw out "morality" and use "goodness" and "praiseworthiness" to avoid equivocation.

So "is there moral luck?" can be interpreted as two different questions: is there praiseworthiness-luck, and is there goodness-luck?

The answer to the first question, I think we both agree, is no. If you honestly believed you were doing the right thing, you are praiseworthy, whether or not you lucked out into being correct. This is important because it allows us to reward and punish people in a sane way, i.e. we wouldn't imprison the man who rescued the drowning child for war crimes, even though his actions did technically cause World War II.

That brings us to the second question: is there goodness-luck?

You seem to believe it's obvious that there isn't, but that doesn't seem obvious to me at all. The "save Hitler from drowning" example could be goodness-luck in any form of consequentialism; same with the classic example of the would-be murderer who ends up not committing murder because his victim was wearing a hidden bulletproof vest.

"I'd only call what is actually good "moral", and the last thing
"praiseworthy", but my point is that the first *can't* exist as a
separate category. The best I can do and what is actually good *must* be
identical. So I'm fundamentally rejecting all forms of moral luck."

Why must these be identical? I agree with your comparison of "good" and "praiseworthy" to "true" and "rational", but doesn't declaring praiseworthy = good mean you're insisting the map = the territory?

"And because what matters is not what, say, a society should encourage
people to do, or what the best realistic outcome is, but what is
*actually good*, moral luck sucks real bad."

What matters to whom? I would have said exactly the opposite - that the important question of moral philosophy is what a society should encourage people to do, since it seems closely related to the question "What should I encourage myself to do?". Once we know that, further worrying about whether it is in fact good or will turn out to be bad by moral luck seems about as profitable as saying "Okay, all the *evidence* points to evolution, but what if that was just planted there by Satan and creationism's true all along by epistemic bad luck?"

by muflax on Tue, 19 Jun 2012 08:00:35 -0700

I tentatively agree that there is no praiseworthiness-luck, but I'm not sure.

Consider the case of a dangerous revolutionary who wanted a reform for basically good reasons, but who due to a lack of understanding is likely to cause much more harm than good. (Plenty of historical examples to choose from.)

It might make perfect sense to condemn and even persecute them, even though technically, they may not have done anything bad and, really, couldn't have known better. It's not their fault that they were a bit naive or misinformed, but we still wouldn't want to praise them, at least in retrospect.

But overall, I agree that "I genuinely did what I thought was good" is not worthy of condemnation, should often be encouraged, and failing doesn't make a person evil, just misguided.

Ok, goodness-luck. First, let me point out that you have exactly understood our disagreement. :)

I'm not sure I can, here or in a future post, make a decent case against goodness-luck that rests only on axioms or principles that would themselves be less philosophically controversial than the position I'm arguing for. (Though not among The Right Philosophers, who just all happen to agree with me.)

I may make a better attempt later, but a few points nonetheless.

For one, yes, this is a major problem for consequentialism. I genuinely don't understand why smart consequentialists don't regularly despair about it, though. This is a huge fucking problem! If you do the slightest wrong thing - get a calculation wrong somewhere, underestimate a risk, fail to think of a hypothesis - you might literally do extremely evil things on a regular basis. Heck, you might be doing it anyway even *if* you do everything right! On a purely emotional level, *this should cause people to freak out*.

(However, not all consequentialisms have this problem, and I don't inherently reject them, but I think consequentialism in general is on too low a meta-level to do useful meta-ethics on, and too hard to use to decide policies with. This doesn't invalidate it, but something like virtue ethics is both meta-ethically and pragmatically much more useful.

It takes a *lot* of effort to get someone to understand consequentialism, and to *also* understand why pushing the fat man is still a bad idea. If you had just started out with virtue ethics, you'd have gotten much saner results right away. And similarly, using more fundamental principles, as e.g. Leibniz did, would also yield better meta-ethics, even if it might turn out to be isomorphic to some particular consequentialism.)

Another point is that with goodness-luck, you could have unreachable morality. Like, if you're born a pebblesorter, then if you do everything right, derive Bayes, VNM and all that, never make the slightest mistake, you'd still be evil. Sucks to be you.

What makes you think *any* process that generated you is so reliable that it just happened to give you the right ideas about morality? If goodness-luck exists, it seems to me much more plausible that we'd all have to either be moral relativists who don't think there's an Actually Good Thing To Do embedded in reality somehow, and that we just have a bunch of conventions and preferences, or freak out about how we can get our philosophy right to avoid the highly likely mistake of Being Utterly Wrong About Goodness.

Like, goodness is the one thing where paranoid perfectionism is completely appropriate. Settling for "the best our society can do, given what we know", to me, misses the point of The Good.

In the case of truth, this isn't a big deal. If our territory and map don't match, that's not fundamentally a bad thing. Merely knowing the truth is not a good thing by itself, and given that all maps are compressions of the territory, by the pigeonhole principle, they *have* to leave stuff out. They can't be exactly right in the general case.
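(In symbols, with T the territory's state space, M the map's smaller state space and m : T -> M the compression: |M| < |T| forces m to be non-injective, so some distinct states t1 and t2 end up with m(t1) = m(t2) - the map simply cannot tell them apart.)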

This becomes problematic for deciding-what-to-do, though. If you ask a philosophy oracle, "What should I do to make the most accurate predictions, given my limited data?", then "use Bayesian induction and these techniques here" would be a perfectly sensible answer. If you ask, "What should I do to do the right thing, given that I'm a pebblesorter?", then "don't have been born lol" isn't.

It's not really that the goodness-map and the goodness-territory have to be identical. (I'm explicitly arguing that false beliefs and outright trolling, even about morality, are compatible with goodness, after all.) Rather, I'm demanding that the territory doesn't have cut-off areas you can't ever reach, but *should*.

A territory I can't ever in principle map might as well not exist. Goodness I can't in principle attain, similarly, isn't real goodness.

It is somewhat axiomatic to demand that, and not necessarily justified by other things, but to me it seems to be on the same level as the rejection of "what if there isn't a territory", but not as implausible as "what if there is goodness-magic that means I can't ever do evil" (e.g. Calvinism for the chosen).

(And it's not like map and territory *have* to be different. Phenomenology would be another example. What I call my experience of seeing red, and actually seeing red, are necessarily the same thing, regardless of whatever explanation you have why I see red, or how it works. I can't be wrong about experiencing something, and I can't be screwed out of doing the right thing, even though I can easily be wrong about mechanisms or causes.)

by Yvain on Tue, 19 Jun 2012 12:54:13 -0700

I agree with you about praiseworthiness-luck on all the actual implications, although my model of that situation has some of the concepts on different levels and in kind of different groupings so that it would still count.

"For one, yes, this is a major problem for consequentialism. I genuinely don't understand why smart consequentialists don't regularly despair about it though. This is a huge fucking problem! If you do the slightest wrong thing - get a calculation wrong somewhere, underestimate a risk, just didn't think of an hypothesis - you might literally do extremely evil things on a regular basis. Heck, you might be doing it anyway even *if* you do everything right! On a purely emotional level, *this should cause people to freak out*."

I don't understand...isn't this self-evidently the world we're living in now? If I vote for either candidate (or don't vote at all) in the next election, I may accidentally and with the best of intentions be causing World War III, which would be horrible. This fact is scary and worth freaking out over, but I don't see why the level of freakiness depends on our moral theories. We seem to both agree that the person involved is blameless, and I think we both agree that World War III would be horrible, so how does consequentialism worsen the problem or nonconsequentialism absolve you from it?

Or to put it a different way, we both have an axiological "good" in which World War III is not good, and we both have a concept of "praiseworthiness" in which voting for the candidate you think best is praiseworthy, even if they end up causing WWIII. For me, these two concepts seem sufficient for all moral discussion. You seem to be searching for something somewhere in between these two concepts, and I'm not clear what exactly you're trying to do with it.

...and now reading further I'm getting hints that our opinions are so different we probably can't have a meaningful discussion on this level; you're saying that there ought to be an honest-to-goodness objectively true (as opposed to the weaselly "objectively true" most LWers use) morality that produces universally compelling arguments, and that's what's sitting between axiologically-good and praiseworthy, aren't you? That might be too big a disagreement to resolve, but at least I'd see where you're coming from.

by muflax on Tue, 19 Jun 2012 14:52:43 -0700

Agreed about the WW3 scenario in all aspects.

Well, if you have local meta-ethics (virtue ethics, say, is typically local, but you don't have to drop consequentialism), then "the good thing to do" is always among your possible actions. You are never *that* wrong. You inherently *can't* be.

You might still cause WW3, though. And you might still believe it is not-good that you did that. But you'd be wrong about this belief. And you might still be condemned for it. And that may even be appropriate. (And figuring out what to praise and how to coordinate is insanely hard. I don't claim to have any deep insights there.) But you didn't actually do something evil. The universe isn't *that* screwed up.

I don't think that this is a trivial or irrelevant distinction, but it may not be very important pragmatically, sure.

And well, "universally compelling" in a sense, yes. A good agent should in all circumstances in all possible worlds be able to do the right thing. For them, being born among pebblesorters would not be an issue - they could still reason themselves into the One True Morality. (OTMs, if there are multiple optimal solutions. Seems plausible.)

But some agents might just be evil. Can't argue everyone into goodness, but good agents can always bootstrap themselves. (And yes, that's a major disagreement with, say, Eliezer's crypto-relativism.)

(Agent, as in "a certain decision algorithm", and world-independence as in "does the right thing, regardless of perceived evidence".)

by muflax on Tue, 19 Jun 2012 15:34:02 -0700

Also, I'm not sure we have to stop the discussion. I *think* I may have found another way to argue for there being an OTM, or at least to make it sound not entirely crazy that there could be. I'll try writing a post about it soon.

("That may be true" is, after all, more than halfway there already. If I can show that virtue ethicists who believe in as-objective-as-math morality aren't batshit crazy, I've done all I could hope for.)

by Bo on Fri, 22 Jun 2012 07:53:40 -0700

> Beliefs are about predictions and anticipation, but if someone who holds an irrational belief makes exactly the same predictions in all circumstances as someone who holds the rational belief, then, well, you’re doing it wrong.

Um, what about the implied invisible? Though I guess you might have given that up after the multiverse freaked you out.

> So because others don’t have an obligation to hear any particular version of the truth, you can’t, in general, have an obligation to speak it either.

The only thing this establishes is that not telling someone the truth won't strip away their ability to act morally. It could still hurt them in other ways, or be generally vicious for some other reason. 

by Bo on Fri, 22 Jun 2012 07:56:20 -0700

 (And of course this post doesn't even try to establish that the gods are real, so the title doesn't really make sense)

by muflax on Fri, 22 Jun 2012 08:40:58 -0700

> The only thing this establishes is that not telling someone the truth won't strip away their ability to act morally. It could still hurt them in other ways, or be generally vicious for some other reason.

True, which is why I said it doesn't have to be a nice position, and other factors like the troll's intentions matter. I'm just arguing that, in some circumstances, trolling is morally permissible. (The more moral and intelligent you are, the larger that set of circumstances likely is.)

(And yes, I deny the implied invisible in its strongest form (and always have), i.e. things that don't ever causally or experientially affect me, not even indirectly, don't exist.

A multiverse, counterfactual worlds and so on are not part of that, necessarily, due to acausal interactions, for example. Or because a model that includes them is faster to compute, which gives me causal advantages and so on.)

by Bo on Fri, 22 Jun 2012 10:19:41 -0700

 > I'm just arguing that, in some circumstances, trolling is morally permissible.

You can't establish that something is permissible by refuting an argument for why it's not permissible, which seems to be all you did (you argued that trolling is permissible because the argument - that people have an obligation to know the truth, and that you therefore have an obligation to tell it to them - fails).

by muflax on Fri, 22 Jun 2012 12:05:39 -0700

Evidence against not-A is evidence for A, no?
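(In probability terms: P(A|E) = 1 - P(not-A|E) and P(A) = 1 - P(not-A), so P(not-A|E) < P(not-A) exactly when P(A|E) > P(A).)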

If it's allowed for you to receive non-truth, then there exists a situation in which you receive non-truth and it's alright, so someone has sent non-truth and it's alright, so sometimes sending non-truth is alright.

Straightforward symmetry of communication.

by Bo on Sat, 23 Jun 2012 01:38:59 -0700

> Evidence against not-A is evidence for A, no?

Sure, many logical fallacies are still *evidence* for their conclusion.

> If it's allowed for you to receive non-truth, then there exists a situation in which you receive non-truth and it's alright, so someone has sent non-truth and it's alright, so sometimes sending non-truth is alright.

Alright *for the receiver*, not necessarily *for the sender*.

From the fact that an act doesn't strip away the target's ability to act morally, it in no way generally follows that the act is ever permissible. Your argument really seems to be simply logically invalid.

by Grognor on Fri, 06 Jul 2012 17:23:01 -0700

 I was trolling with that tweet.