Payload

Date published: Sun, 21 Apr 2013 16:00:00 -0700. Epistemic state: log.

Let’s see…

I did a bunch of stuff I don’t properly understand enough yet to talk about without making an ass of myself, and even though that hasn’t ever stopped me before, I don’t wanna contribute too much to People Being Wrong On The Internet if I can help it…

And all other projects won’t have anything meaningful to show off or talk about for some time to come, and “I read these books; they don’t suck” posts aren’t very interesting (unless I can at least say something about them)…

And this situation isn’t gonna change anytime soon, and this is still primarily a practice log (maybe we should call an ironic purpose a “telol”?), so even though I have a bunch of drafts, I don’t wanna just dump my crazy.txt[^1] here without at least occasionally balancing it with not_actually_crazy.txt.

So, uh, what am I gonna talk about the next few months? Guess I’ll have to become ultra-productive and cram in some more minor projects.

*shiiin~~~* (the Japanese sound effect for dead silence)

Yeah ok, volume will probably just go down and I’ll try to mention cool shit and minor projects until stuff gets interesting again. (Lowered the road dial again for a while.)


Hint: someone scanned Westergaard’s Introduction to Tonal Theory and I hear it will also appear on libgen in a few days. Just fyi.

On a totally unrelated note, I’ve started reading Westergaard’s book and I’m genuinely impressed. It looks like a proper textbook about a sane theory with lots of examples and a clear, logical structure throughout. Haven’t had much time to work on it yet, but it looks very promising.


Some drug updates.

  • Modafinil: useless as a normal upper, mediocre as a sleep substitute, great for “fuck deadline can’t afford to sleep fuck fuck”. The effect during waking hours is not noticeably better than caffeine for me. If it weren’t so expensive, I’d run a double-blind test to compare side-effects and get a more precise idea, but as it is, it’s just not worth it. Will likely keep a small emergency supply on hand, but otherwise I’ll stick to nicotine and caffeine.

  • Nicotine: started to smoke cigarettes again. Not a lot (i.e. <40/month), and it’s not really about the nicotine either, which is almost anti-addictive for me. Despite clear benefits and wanting to use it regularly, I keep forgetting I have large supplies of lozenges and go days (and sometimes weeks) without any for no good reason, and even when I use them, it’s never more than 4mg/day.[^2]

    So why smoke? Because I want something destructive - a tiny, controlled source of it, yes, but to deliberately do something of no use - no, not enough! - to take an action fully aware that it brings nothing but harm to myself, is to say, no, I cannot be satisfied; to say to optimization itself, nothing you can do will ever make me happy; this is my vote of discord, my dissatisfaction with order - this one is for decay.


I made a Twitter account for drunken confessions. If I can’t say them to the person I mean them for, I can at least say them in general just to have them said, to lower the barrier - a practice round instead of just sadness. Maybe it’s gone tomorrow, maybe not. Don’t read them. I made it for you. Take with a grain of salt, placed on the back of your hand and a slice of lemon; count your shots.


I feel like many arguments could be more easily resolved if people more routinely asked themselves the First Rule of Debugging, i.e. “What are you trying to do with that?”.

I’ve failed to state that enough in the past (and I’ve added “state what this is trying to accomplish” to my list of rules to observe in future writing), so here’s an example for practice.

In Scott’s wrestling with virtue ethics,[^3] I was reminded how I had always assumed (if not explicitly, then implicitly in how I write) that my specific problems with ethics generalize, and then how I was somewhat upset that someone could have the nerve to develop ideas that might work for their problems, maybe, but not mine! What sacrilege.

In particular, I have no clue what I want (I feel like I may have some preferences in forced decisions (i.e. almost all of them), but that’s far from wanting things for their own sake), so theories about “getting what I want” are of no use to me (regardless of their validity). But I know “the kind of person” I don’t want to be (“apophatic virtue ethics”, you say?). At the same time (and for good reasons I won’t elaborate on), I don’t trust my own ideas of what “good” is, and many common heuristics - trust your instincts, use empathy, follow your heart - are horrible advice for me because I’m at least potentially a Bad Person(tm) and I will fuck people over if you’re so stupid as to make me run on instinct. So I don’t just need (meta-)ethics to resolve conflicts for me, but to even provide (some of?) my values in the first place.

So any theory that requires me to load it with my values first is utterly useless to me because it doesn’t solve any of my actual problems. For example, my motivation for making locality a requirement for any meta-ethical theory is that I don’t want to be held responsible - including by abstract principles - for things I have no possible way of doing right. Or as Kant is commonly (and uncharacteristically) summarized, “ought implies can”. I don’t know what “the Good” is, in terms of a specific outcome or kind of world. There are narratives that shape what I think of as a “good world”, but these are clearly contingent and very non-mainstream (currently, anyway[^4]). A defining feature of any concept is that there are some things it isn’t, some negative examples. But if “the Good” is just whatever these narratives have led me to think of as “Good”, then there is no way for me to be wrong about it.[^5] The label “Good” is then just an arbitrary placeholder. I don’t know if “Good” outside this usage has any meaning either, but if so, I want to at least cover my ass. Thus, Moral Realism That Doesn’t Do Anything.

Interestingly, I don’t seem to apply this same thinking to the past. Even though Bad Shit happened,[^6] I’m completely unwilling to let it go and still consider Things I Literally No Longer Have Any Causal Control Over[^7] just as relevant, and so a single negative example can devastate me and keep on devastating me.[^8] I wonder if I should stop doing that…

  1. Actual content of ~/drafts, my crazy.txt:

    • start of a manifesto “Against Compassion”
    • “I didn’t choose to be a wolf”, a meditation on fursonae
    • sketches about “could zombies have saved the Nazi war effort?” until I got bored doing calculations (includes mostly worked-out Nazi Zombie campaign setting, incl. sketch of a metaphysics to make it reasonably coherent)
    • a love poem to cheesecake
    • “why everything sucks and will never ever get better”, a reading of some of Luther’s more obscure writing (on hold until I figure out if everything sucks and will never ever get better or not)
    • an incomplete sketch of “Against Imagination”, which keeps on metastasizing as I add more and more anger to it, but it never really goes anywhere constructive and I wonder if I’m not just rewriting the Futurist Manifesto anyway
    • “I’m the worst person to talk about politics, but I’m drunk and I have a pile of manifestos, try stoppin’ me”, an interdisciplinary attempt at political exploration
    • several sketches and attempts at a new critical method I call Passive-Aggressive Deconstruction

    I think it’s obvious why those drafts never go anywhere.

  2. I’m deliberately ignoring e-cigarettes even though they’re cheaper because I don’t want to accidentally establish a “suck on cigarettes” habit that might transfer to real cigarettes; this way I can always maintain a behavioral separation between “nicotine so my brain works” and “smoking for emotional reasons”, despite them sharing some drug effects. The nicotine in cigarettes is still welcome and I substitute accordingly, but it’s a bit of a pity I have to smoke strong brands just to remind me of her smell.

    hot ashes for trees

  3. In particular this post on how virtue ethics is not useful in practice, and this (highly excerpted) exchange with Vladimir:

    [Vladimir:] If this abstract theory [e.g. utilitarianism] provides answers for the extreme and controversial cases, then it should provide answers for everyday common cases as well. But here we see that these abstract theories are of little use, often providing no useful answer or plainly absurd answers, and requiring tortured rationalizations and special pleading just to get them to clear the bar of ordinary common sense.

    […]

    If we actually do start from scratch in our study of ethics and make sure to stick to the reality of what human beings are, not to metaphysical pies in the sky and sophistries useful only for signaling and lawyering, we will end up — or at least have to start with — something resembling virtue ethics.

    [Scott:] Aside from the things I addressed in my new post, we seem to disagree a lot on how bad utilitarian and deontological ethics work. As far as I can tell they work about as well as Newtonian physics – they get the right result in the overwhelming majority of cases, but break down in certain weird edge cases where in fact they might still be salvageable.

    was the thing that made me throw up my hands and say in a moment of amused desperation, surely you can’t be asserting this! I don’t have an issue with saying that virtue ethics isn’t really a theory of anything but more a label of a kind of discourse, a way of doing morality instead of thinking about morality, and that it is, to use Skinner’s terminology, still mostly prescientific. Fair enough.

    But to say that utilitarianism (or even better, deontology! The simplest, most straightforward example of deontological reasoning - don’t deceive - is highly controversial and virtually nobody agrees with it!) works most of the time is… well, Scott is, to use Moldbug’s amusing phrase, “not a blithering idiot”, so I can’t just say, “clearly he just doesn’t know what he’s talking about”. There must be a disagreement not about results (because Scott is neither ignorant nor in denial), but about the problem. With a disparity this huge, we can’t be looking at the same thing.

    One might be tempted to say, “this just proves that some people think inherently like virtue ethicists and some like utilitarians”, but that’s just throwing away all possibility of rational discourse forever, and most importantly, this isn’t even the case. (Even if this were the case, “get out of your weird bubble” would likely be a more appropriate response, but I’m not convinced this even applies to Scott, unlike say Kant.)

    If I were more the person I want to be, I would now be able to write an eloquent reply, but alas, the only answer I am capable of is a stare of incredulity (which I hate) or an incoherent angry rant (which I love). Wanting not to be abusive, I thus experimented with alternatives and came up with Passive-Aggressive Deconstruction, which is probably still trollish, but maybe not hostile. (Or at least amusingly so.)

    Essentially - and this is why I haven’t written more than sketches yet, because this is still a lot of work - I just wanted to highlight by mere quotation how Scott himself uses virtue-ethical reasoning frequently on his own blog, and how then, tumblr-style, I could just add his comment pasted on a fitting useless_utilitarian.jpg after each quote and ask, “Why don’t I see a calculation here?” or “Why do you think this is a valid form of argument?”, so that after a sufficiently large number of instances of this, it would at least be more than an assertion of “I can’t believe you’re saying what I think you’re saying, but I don’t see what else you possibly could mean to say”.

    But eh, now that I’ve outlined the general approach, I can’t be bothered to do it concretely. (<3)

  4. I don’t know of any way to discuss these things without either moving to a very different audience (which I have no interest in) or attempting an act of bridging a moral divide that is akin to making a lion speak. Regardless - and I don’t know if that is intentional or not - I might eventually have to get there if I ever want to go meta and discuss important figures in Higher Criticism directly and not just their ideas about how to read the NT. But I’ve failed to tell Lutherans that they don’t understand Luther, so I’m not optimistic about that ever succeeding.

  5. To be a bit more accurate, the problem is how the concept is learned and it basically separates moral subjectivists from moral relativists. In the subjectivist view, “Good” is the extension of whatever collection of preferences I happen to hold, while moral relativists conceive of some distributed concept shared by a community, like a language.

    So for example and by analogy, there is a wrong way to speak French (even though there is no Standard Written Into The World Itself that tells us a priori what French “is” and even though boundaries might be fuzzy) and so it makes sense to say someone can learn French (because we can provide positive and negative examples). But under the subjectivist view, that is not the case because only I - or rather, the preferences in my skull - define the concept. As such, I can teach what “Good” means (i.e. others can be wrong about it), but I can’t learn it because I already, by construction, embody it. (This ethical egoism may, as a matter of fact, include the value of “altruism”, of course - values aren’t justifications.)

  6. Look, Internet, I’m sorry but we aren’t that close.

  7. Well yes, Molinists can change the past, but then they have to negotiate with God and that doesn’t sound any easier, if you ask me…

  8. As Baumeister famously argues, “bad is stronger than good”.

by gwern on Mon, 22 Apr 2013 13:17:10 -0700

Smoking seems like an expensive way to hurt yourself. Couldn't you do something like cut yourself instead? Even if you had to buy a razor, it'd probably pay for itself within the month.

by komponisto on Mon, 22 Apr 2013 17:15:31 -0700

"Hint: someone scanned Westergaard’s Introduction to Tonal Theory and I hear it will also appear on libgen in a few days. Just fyi."

Muflax, you have officially become a saint. No need to ever do anything again; merely spreading the rumor of this miracle was enough.

(Admittedly, it does pose certain problems for me. I was planning to write up a long sequence of blog posts on Westergaardian theory next year or thereabouts, and that was going to be the Only Exposition On The Internet. But now that the Holy Scripture itself is freely available, the importance of priests like me is diminished. I guess that means I'll have to be content with explaining my own further development of the theory, i.e. starting a new radical sect of the faith.)

by muflax on Mon, 22 Apr 2013 17:42:12 -0700

Well, I for one am very interested in your improvements to his theory and some general further exposition. I'm completely new to music theory (or doing anything with music, really), so even though Westergaard assumes very little theoretical knowledge (bless his heart!), I'm still occasionally confused by some of the underlying decisions (like the seemingly crazy key-based notation instead of a "one line, one pitch" system).

While you're here, I also wonder how useful Westergaard / Schenker's analysis is for pop / rock. I don't really listen to (or intend to play) a lot of classical music (for now), so when I began teaching myself the guitar, I looked for some useful explanations of how the ubiquitous "chord progressions" work, and couldn't find anything. Westergaard's approach is the only thing that's not "magic Roman numbers!", so despite some separation between the traditions, can I expect to eventually understand how, say, a punk song works, using the same tools? Like, are there additional components that one would have to reduce/analyze first?

by komponisto on Mon, 22 Apr 2013 19:09:45 -0700

The key-based (or, let's rather say, diatonically-based) notation is actually a great feature, to the extent we hear pitches in terms of diatonic collections (which, according to Westergaardian theory, we do). This is the same advantage the letter-naming convention for pitch-classes (C, C#/Db, D, D#/Eb, E...) has over the use of integers mod 12 (0, 1, 2, 3, 4...). (Unfortunately, there's a whole gigantic theoretical tradition that claims this feature becomes a bug in the twentieth century; but you can ignore this universal conventional wisdom, because it's wrong.)
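To make the information loss concrete, here's a toy sketch (my own illustration, not anything from Westergaard's book; the note spellings are standard, the code is just for show):

```python
# Integers mod 12 collapse enharmonic spellings into one number,
# so the diatonic information carried by letter names is lost.
PITCH_CLASS = {"C": 0, "C#": 1, "Db": 1, "D": 2, "Eb": 3, "E": 4}

print(PITCH_CLASS["C#"] == PITCH_CLASS["Db"])  # True: "1" can't tell them apart
# But C# is the leading tone of D major, while Db is the subtonic of
# Eb natural minor - two different diatonic roles that the letter
# names preserve and the bare integer erases.
```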

Ultimately, Westergaard's theory aspires to describe how a musical mind organizes the sounds of a piece (what music theory in fact is), not to catalogue the compositional conventions of some particular musical tradition (what most things claiming to be "music theory" are, although they do even this poorly). So I see no limits on its scope in terms of repertory, at least if we stay within the same basic pitch system. If anything, popular music is typically less complex than art music (I refuse to use the awful term "classical"), so you might find the analytical process easier.

Basically, if you want to understand music at the level of depth required to think creatively, Westergaard is the only game in town. (The only other option is tacit knowledge.)

by muflax on Mon, 22 Apr 2013 19:43:19 -0700

Alright then, that sounds promising.

(And yeah, "classical" is kinda awkward, I just don't have a good alternative for "those people who don't really have band-structure". "Art music" seems just as loaded. Well, just gotta be more precise then and stop lumping all those people together into one impractical category.)

by Oligopsony on Mon, 22 Apr 2013 20:49:50 -0700

> start of a manifesto “Against Compassion”
>
> “I didn’t choose to be a wolf”, a meditation on fursonae
>
> sketches about “could zombies have saved the Nazi war effort?” until I got bored doing calculations (includes mostly worked-out Nazi Zombie campaign setting, incl. sketch of a metaphysics to make it reasonably coherent)

You wrote drafts on these and don't intend to ever let them see the light of day? Fuck you.

by Oligopsony on Mon, 22 Apr 2013 21:02:22 -0700

Also, I'm afraid that Molinist chronomancy only works on things you don't know about. If the problem is that the event itself happened, then you can blackmail God into changing it, but as far as the experiences go you have to deal with them using means that are "temporal" in the other sense - and, for what it's worth, far more reliable. I agree that negotiating with God is hard, but that's not because we have no causal influence over Molinist!God but because we don't have a good enough model of what She wants.

Also, while we're not that close, you have my sincere sympathy, whatever it might have been. My model of you doesn't like getting pitied in this way, but my pity is involuntary and my model of you also dislikes getting pitied whether he knows it or not, so no harm in good-faith compassion-signalling here.

by muflax on Mon, 22 Apr 2013 21:07:56 -0700

(+1)

Well, the campaign setting is still progressing and I expect to publish most of that stuff *eventually* in some form, but half-way through figuring out "how would zombie resurrection have to work to be at least as coherent as Aristotelian metaphysics (within reason)?", I began to wonder how much more effective Blitzkrieg tactics would become if you didn't need (human) logistics anymore (Rommel-style tank warfare seems mostly limited by fuel supplies, but Stalingrad wouldn't happen with zombies, and if you could just walk troops through the ocean, would it be much easier to invade England?), and then I began to wonder if (assuming the small zombie quantities my setting enforces) you wouldn't just be better off using untiring soldiers in your factories instead, so I started looking up numbers about just how effective slave labor really was, and then I realized that economics is hard and why does it have to be such a huge war, I just want necromancers, dammit...

(Some of that stuff is on hold until there's more awesome planned stuff to embed it in. I'm just a cranktease.)

by muflax on Mon, 22 Apr 2013 21:12:43 -0700

Thanks. (I'm mostly just unsure what to *do* with pity, and dislike "lemme help you get over it"-like pity, but I appreciate the gesture.)

by muflax on Mon, 22 Apr 2013 21:17:07 -0700

Heh. I'm currently doing a price re-evaluation for nicotine and it doesn't actually seem to me that cigarettes (in the quantity I use them) are all that expensive, and the direct association here is highly desirable.

Also, "this is a token gesture against optimization" - "well, there are more optimal ways to do that" is a delightfully trollish reply.

by gwern on Tue, 23 Apr 2013 04:19:32 -0700

I knew you'd appreciate it. :)

by muflax on Wed, 24 Apr 2013 22:00:36 -0700

About Molinism, I've been thinking a bit about how you'd actually try to negotiate with God. (Even at my craziest, when I think this might actually be possible, I'm not /so/ crazy that I'd try. I've only won a round of Werewolf, like, once.)

Technically, under Molinism there's a possibility of (limited?) causal control over God, but I'm not sure how big it actually would be in a relevant sense. (I suspect this "start as ultra-loyal servants, end up running the show" attitude is why only Jesuits could've invented a theology this clever, though.)

So let's see. The motivation for Molinism to begin with is that we want to say that:

1) agents have free will of some kind (While many Molinists are libertarians, I don't see why it would commit us to it. We really just want to avoid a kind of "emergent agent" view that *denies* causal power of agents, but, say, a software/hardware Aristotelian has no problem with a deterministic world in which algorithms (etc.) still have causal roles to play. But regardless, we can just reinterpret "free will" as "free from coercion" in some naturalized way and end up largely with the same motivation, even though we might have to front-load more values to get there.)

2) God *is* the Good (We *must* consider the concepts of Good and God identical, or we wouldn't have a problem to begin with. If God decrees what "good" means, then He's in no conflict; He can just go Predetermination on our asses. If God follows some external standard, then He might very well be in an unsolvable bind, and might've easily screwed up!)

3) God has (in some sense) created the world (or maybe rather, "begun" or "allowed to exist")

4) people are not (entirely) good

Therefore, God could not make a world that isn't good, but then how can He have free-willed people running around? Presumably, railroading them isn't good, so He has to respect the will of at least those people who exist. Thus, Molinism - He only brings about those people who would do whatever He wants anyway, and the necessary knowledge for that isn't too tricky, so it's an elegant solution.

This means you exist *if and only if* you are in accordance with the Good. This, then, has a huge problem:

How the fuck do you actually *condition* on this? God *knows* all counterfactuals, so He also knows "if X exists, X will condition on existing, and do Y", which is a bit obtuse to begin with - what, "but if X does not exist, X will do Z instead"? I don't see how, even under liber-"magic free will out of nowhere"-tarianism, you can *change your actions* based on the fact that you "know" that you exist, unless you also grant that this counterfactual statement is somehow undetermined (or invalid). You might be able to "learn something new" (aka the Chalmers "totally not epiphenomenalism" gambit), but you can't actually *do* anything with that knowledge because the necessary other prong of that fork, "I would do Z if I didn't exist", is clearly absurd.

Because we can't condition our actions on our existence, we also can't extract any usable information from it. But doesn't this kill all the Molinistry? Not necessarily. We can still "know" that we're justified in considering ourselves as good (in the context that we do exist in, even though we don't know what it is in advance), and so we have moral justifications that work, *even though we can't extract any moral content whatsoever from them*. You probably have to be at least as clever as Plantinga to even appreciate a distinction this subtle.

In general, we can't take any action that would hinder The Good because then God wouldn't have made us. So for any situation where we have some causal control over the outcome, it can only be about outcomes that are equally good. This might still give us plenty of choice, as there's no reason to assume that "Good" is so narrow that it implies exactly one possible world. In particular, we might be able to "pile on" any number of neutral things without affecting the world's overall goodness.

But then we have to answer the question, "Is a counterfactual evil still evil?". Would Möngke Khan still be as evil if he had died of a heart attack before he got a chance to sack Baghdad?

a) If not, then (if you don't inherently value your own existence) you can simply precommit to do whatever the fuck you want whenever you want because if it happens to be good, God will create you, and if not, well you won't exist, but you also will not do any evil. (Which explains why Epicurus might've been a powerful mage.) But in general, this makes it possible to offer miracles *to* God - "I will do this really evil thing under nearly all circumstances, but if this rare circumstance I want obtains, I will freely do something really good" - which suggests that some genuine saints might be potentially truly evil, but only good in actuality (due to God's interference). (We now know how Siddhartha pulled off the 1-in-a-kalpa trick of unguided enlightenment.)

b) If counterfactual evil is still evil, then only someone who has a pretty good idea of what the Good is can hope to manipulate God (or will never come to exist), but we also get all the nastiness of predetermination and pseudo-reincarnation back. It also makes the idea of "prevention" questionable. But at least it suggests that only saints can attempt miracles, which tells us why Catholics aren't concerned with discernment problems. Why is spontaneous healing not also evidence of demonic intervention? Because demons don't know what good is, and could never hope to formulate "still as good, but more like what I want" ultimata for God.
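Since we're doing theology with a programmer's gloves on anyway, here's a minimal sketch of option a) above, with agents modeled as policies (functions from circumstances to actions) and crude made-up goodness scores; every name and number in it is my own toy assumption, not anything any Molinist has committed to:

```python
# Toy Molinism, reading (a): only *actualized* evil counts.

def honest_saint(circumstance):
    return 1  # freely does something mildly good everywhere

def precommitted_blackmailer(circumstance):
    # "I will do something really evil under nearly all circumstances,
    # but freely do something really good if the rare circumstance
    # I want obtains."
    return 10 if circumstance == "rare_circumstance_i_want" else -10

def actualized(candidates, circumstance, threshold=0):
    """Which candidates does God create in a world where `circumstance`
    obtains? Middle knowledge lets Him evaluate each policy without
    instantiating anyone, so He just filters."""
    return [agent.__name__ for agent in candidates
            if agent(circumstance) >= threshold]

for c in ["ordinary_monday", "rare_circumstance_i_want"]:
    print(c, "->", actualized([honest_saint, precommitted_blackmailer], c))
# ordinary_monday -> ['honest_saint']
# rare_circumstance_i_want -> ['honest_saint', 'precommitted_blackmailer']
```

The blackmailer never gets to run the evil branch: in worlds where they would, they simply don't exist, and the only world that contains them is the one they wanted. Which is exactly the "offer miracles *to* God" trick.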

Clearly more research is needed.

(A last note: if we accept panpsychism, then we can actually reject counterfactuals of freedom in general because then we can say, "well it's not the case that I correctly predict what flavor of ice cream you will take without having seen you take it, even though you are 'free' (so I know counterfactuals), but instead I have *created* a tiny version of you - maybe just the ice cream part? - and *observed* it make a free choice, and you will do the same thing later, and so I can predict it" and thus God can't predict any agent unless He first actualizes it. Which means that God *can't* know the future (of any world with people in it), and that predicting *anything* means you create people. If you abandon any idea, no matter how bad, you're just as evil as the liberal abortionist! At last, a metaphysical defense of total inerrantism!)

by Yvain on Thu, 25 Apr 2013 19:25:28 -0700

Can you explain why you're so surprised by the statement that deontology and utilitarianism work in most cases?

What I meant by that was...okay, so consider some moral decisions people make. That guy insulted me; can I beat him up? That girl is pretty; can I rape her? I promised my mother I'd call her tonight but I'm very tired; do I really have to? This charity seems effective; should I donate to it?

Regarding "beat guy up": deontology says no, initiation of violence is wrong. Utilitarianism says no, infliction of suffering is wrong without good reason. Both systems come to correct conclusion.

Regarding "rape attractive person": deontology says no, still initiation of violence/violation of personal boundaries. Utilitarianism says no, still inflicting more suffering than benefit. Both systems come to correct conclusion.

Regarding "call mother": deontology says yes, promises are inviolable. Utilitarianism says yes, keeping promises is necessary for the very useful institution of promises/trust to endure (with possible out if this would be extremely painful for you and your mother wouldn't mind much). Both systems come to correct conclusion.

Regarding "donate to charity": deontology says supererogatory and this probably isn't part of its proper domain anyway. Utilitarianism says yes. Both systems seem to be at least on the right track.

I wasn't claiming anything stronger than this to Vladimir. I'm surprised that you even think "don't deceive" doesn't work most of the time; there are certainly cases where it doesn't, but most of the time lying seems worse than non-lying, probably a vast majority. Possibly you're restricting the cases you find interesting to moral dilemmas? Which are pre-selected as places where our philosophical systems break down?

I will in fact make the much stronger claim that there exists a formal philosophical moral system (libertarian simple non-initiation of force except maybe for law enforcement taxes, combined with punishment of those who do) which is both pretty well-specified and which would work better than the phronetic moral systems actually used by the vast majority of cultures in history. And I don't even think that's the best moral system philosophy has to offer.

by muflax on Sat, 27 Apr 2013 13:54:44 -0700

(First half of a response. Substantial half that addresses the examples comes later.)

Yes, those are the kind of cases I was thinking of, not thought experiments designed to confuse ethical systems (and what I thought you meant, too), and this is the kind of claim I disagree with, even though, in those cases, I think we pretty much agree on what the "correct" conclusion is.

(I feel a bit weird effectively asking you to defend deontological systems and repeating the standard questions, like, "Is benefiting from an exploitative economic system aggression?", "Who can I be aggressive against? Do insects count? If not, why not?", "How do you observe an intention anyway?", etc., so I'm not going to do that because I expect we'd both quickly reach a point of "dunno, I don't understand deontology either".)

My point is that, if you use these ethical systems at face value and without knowing the desired conclusions in advance (or at least pretending so), you won't come to these common-sense conclusions. You'll end up even more extreme than Kant. The history of meta-ethics of consequentialism (in all forms) and deontology is, to slightly exaggerate my impression, 10% reasonable attempts at justification and 90% "ok yeah, this straightforwardly implies completely absurd conclusions, but here is my infinite reservoir of clever reasons why this isn't actually the case and why we should only use my theory when we like the implications" (and the rare extremist who bites some of these bullets).

(I'll try to substantiate that later using your examples.)

by muflax on Thu, 09 May 2013 02:44:06 -0700

Ultimately, I think, you can only use Molinism to gain causal leverage over God if agents are, in a sense, non-reducible to individual decisions.

Say you want to specify a person A. You could do that by enumerating all their decisions under all conceivable circumstances, so they might like strawberry ice cream over vanilla ice cream, Spike x Buffy over Angel x Buffy and so on. (This is very similar to listing their full utility function.) The problem with that is that now, if you don't like a certain decision that person makes (say, you'd want the same person, *except* they should prefer Willow x Buffy), you can trivially specify a person B that suits your needs.

This might be intractable for finite minds, but we're concerned with God, so let's handwave that. If such a specification uniquely picks out a person, and no specification is inherently impossible (as it would be if, say, only agents that obey the VNM axioms were possible), then God can mix-and-match whatever agent He wants, and you can't blackmail Him because He can always just replace you with a more compliant version.
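To see how cheap the replacement is, a minimal sketch with a person as a bare decision-table; all the circumstances and choices are invented for illustration:

```python
# If a person just *is* their table of decisions-under-circumstances...
person_a = {
    "ice_cream": "strawberry",
    "buffy_ship": "Spike x Buffy",
    "deliver_jesus_to_the_cross": "refuse",  # your attempted leverage
}

# ...and *any* such table counts as a possible agent, then God can
# trivially specify person B: identical to you, except compliant.
person_b = dict(person_a, deliver_jesus_to_the_cross="comply")

print(person_b["deliver_jesus_to_the_cross"])  # 'comply' - blackmail defused
```

The options below are then just the three ways of breaking this one-line edit: make prediction require instantiation, restrict which tables count as agents, or make the replacement costly.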

So for blackmail etc. to work, it seems to me one of these things has to be true:

1) God can't actually predict your actions without creating you (and therefore Molinism is false anyway).

2) Not all conceivable decision-agglomerates are agents, and so God only has a limited stock of people to select from, which gives you straightforward leverage. (There are still coordination issues. If you aren't willing to deliver Jesus to the cross, say, maybe one of the other 11 guys is? Better make sure your advantage to God is either really unique, or you're really good at politics.)

3) The cost of your demands is lower than the cost of selecting a more fitting replacement. This seems... weird to suppose for Being Itself, but if the Creator's more of a computationally-limited simulator, that might still work. I don't see how you'd be able to formulate such costs for God (like, what are His resources you're using up, especially *before* Creation?), but it might give you a way to derive "Molinism lite" for pagans. This should totally work on Odin. (Maybe the blind eye is symbolic of his deliberate unknowing of consequences, so he can't be as easily blackmailed?)

by muflax on Thu, 09 May 2013 02:49:10 -0700

Addendum:

One way that not all specifications are reachable could be that God doesn't have causal power over the world *after* Creation. He can only design the clock and wind it up, but once started, that's it. That might give you some room to propose that only certain starting conditions and laws are "possible" for God to select from, and that those aren't detailed enough to allow arbitrary persons to emerge later on.

(But that's a very Protestant way of thinking about God. Better hope He doesn't use miracles to eradicate those meddling trollinists...)

by muflax on Fri, 10 May 2013 12:14:09 -0700

So I've been mulling over an actual reply for a while now and I give up on writing one (for at least the near future). That's kind of a rude thing to do, and that I don't have an easy-to-articulate version makes me lower my confidence in the criticism, but there's not much else I can do.

A few points nonetheless, drawn from some of the drafts of a reply I attempted. It might at least hint at some kind of argument.

---

1) The formulation of these (and similar) examples is already highly specific, and so the dilemma "should I do X or not?" already does much of the work that the theory *should* be doing. A fairer formulation would be "this person insulted me, what should I do?" (even though that already presupposes the concept of "insult", which is community-negotiated; all observation is theory-laden, all ethics is narrative-laden...). Utilitarianism, as basically all consequentialism, can't help you much at first because the answer-space is so huge. Deontology is at least somewhat helpful because, as Kant argued, negative duties always overshadow positive duties, and so "doing nothing" is always a moral option (within deontology).

So saying "I know that giving to charity A vs. charity B is more moral because it's more effective, and utilitarianism agrees, so it works" is half-cheating because you've already used non-utilitarian methods to narrow it down to these options. "But picking the best option from a tiny subset is still better than nothing!" is technically true, but the way the subset is constructed is the powerful part and how you control the discourse.

---

2) The failures of utilitarianism aren't detected *by* utilitarianism, and so it's parasitic on a different ethical system. Simplest example is the trolley problem. The elegance of utilitarianism seduces you to look at the "fat man" variant, think "oh, that's structurally the same situation" and then decide it's the same problem, even though it isn't. Pushing the fat man is highly sociopathic, and causes deep trust issues that are worse than a minor saving of lives in this situation, and so is clearly the wrong option. Even though utilitarianism *can* certainly go meta enough to notice that and finally give you the correct answer, it doesn't ever tell you *when* you're not meta enough and is in practice highly dangerous *because* humans are so short-sighted. Like in 1), the way moral problems (be they practical or hypothetical) are formulated already does much of the work.

It's my impression, even though that is certainly not a rigorous conclusion, that you can't formulate most actual problems in a specific enough way that you can think about them in a utilitarian way without either sneaking in all your desired conclusions or being parasitic on traditions, narratives and other methods (much of which labels like "virtue ethics" try to cover, but still fail, I think).

I'm sympathetic to the idea that there might be a consequentialist theory that won't have these issues anymore, but we don't seem anywhere close. (And this criticism has nothing to do with the values or basic soundness of any consequentialism, just its practicality at this point.)

If you have an actual real-life example, done from scratch, that applies consequentialist reasoning (of any form) and *only* consequentialist reasoning, please tell me! I know how you'd use Kant's philosophical system to derive categorical imperatives from pure reason (because I'm a nerd), and how you'd use virtue-ethics-y methods to derive things like the Seven Heavenly Virtues based on Scripture[^1] and culturally shared narratives (because my mom smothered me in fiction), and so I can see how you'd use these methods to get Advice I Can Actually Do And Feel Confident In without knowing in advance what I want. I have no idea how I'd do this in any form of consequentialism unless I use external methods to narrow down the answer-space and make highly problematic assumptions (like the VNM axioms). Maybe if I had some worked examples, it wouldn't seem so impossible to me!

[^1]: (Am I allowed to use footnotes in comments that already have too many parenthetical asides? Yes? Ok then!) Note that I'm trying to use examples at all times that are exotic enough that there's no current big argument over them and no matter what position I might argue is of little importance to my actual life, but familiar enough that everyone gets how they work. This is why I like talking about Catholicism etc. a lot, even though 6/8ths of me are an atheist pragmatist anyway. This is why I'm very uncomfortable directly addressing your "rape" and "insult" example, and have a hard time being concrete when my instinctive reaction is "if it's a serious insult, the right thing to do is to punch that fucker in the face!", especially because I'm highly paranoid about what values I might (or should) have, and hate to commit to anything. If I wanted examples I actually care about, I'd have to talk much more about the working class and its pathologies in sympathetic and complex terms when you're not supposed to be anywhere near those, and "overcoming" things like fascism from the *inside* is not A Thing One Does, or a problem Good People should ever have. ("Brecht, Benjamin and Broken Dreams" is a good name for my inevitable punk album, though.)

---

3) This is where I'd also point out that Inside View arguments seem *incredibly* sketchy to me (even though I still like them a lot!) and fail in all complex systems I know. You can't plan economies, can't beat index funds, can't prove computer programs correct, can't predict much of anything unless the system is highly artificial or you have huge past experience so you can use the Outside View. This limits the applicability of consequentialism even for straightforward and simple value systems. If there's no theory that can predict financial markets, how'd I predict "utility" so I can optimize the consequences?

And this is not just on the level of "I don't know how to be an Enlightened Monarch", but "I don't know how to have a happy marriage" (which, for me, is actually a more hypothetical example than becoming a cult leader...) or "I don't know which of my projects to work on this week" unless I just assume some unjustified virtues because I happen to have grown up with them or follow standard solutions my community gives me even though no one knows how (or if!) they work, just so I can stop thinking about the problem because none of this gives me anything resembling tight feedback loops.

---

4) I can clarify my reaction to "don't deceive", and most cases of deontological reasoning break down in pretty much the same way.

The reason this rule exists is that a deontologist is committed to treating people as rational and worthy of truth (why they are, they can't justify, but don't ask them that), and so it is inherently not morally permissible to (intentionally) speak anything but the truth to another person. This commits you to radical honesty, makes it impossible to use white lies, and as the famous counter-example goes, you can't even lie to a murderous criminal. You don't have to be Robin Hanson to see that this is just blatantly against anything humans do ever. This condemns *magicians* who lie to their audience *with their consent*.

Deontologists - be they Prussian Kantians, American Libertarians or others - can either see that the obvious prescriptions of their rules are absurd and so contextualize and handwave them until they get results they like (which is precisely the criticism), or bite the bullets and become very weird people. ("No aggression? Fuck, I can't even use an oven because that would pollute my neighbor's air!")

---

5) There are additional but not directly related issues that keep contaminating my attempts to think about this, and that either prematurely make me go all confrontational like I'm guest-blogging on Pharyngula, or that are clearly motivated thinking of the form, "I know this can't work, but I have no concrete arguments for this specific case yet - let's find some!".

Unfortunately, I'm not currently able to fix this because a) I suck too much and b) the underlying mechanisms that trigger this are very useful and deeply opaque, and so I refuse to dismantle them until I understand them much better. Notably, one of those is a sense of aversion to inelegant design which took a lot of practice to develop, but which I can't usefully introspect much. (A koan might help.)

(Normally in such a situation, one would appeal to some authority instead. Why should we trust cryptographers' highly demanding and apparently superficial intuitions about security design, like rejecting any security through obscurity as inherently insecure? Because everyone else gets hacked left and right, and PGP still stands. Unfortunately again, I *don't* have any such authority. I am merely very paranoid, and in the fields I do know, rightfully so.)

---

6) This is a somewhat snarky example, but there's the old joke that 9 out of 10 people enjoy gang rape. It seems to me that utilitarianism might give you the correct solution in 1:1 violence cases, but it wouldn't if the attackers outnumber the victims. This kind of reasoning seems to be a feature of utilitarianism ("the good of the many outweighs the good of the few"), and that produces very weird conclusions in many cases.

The issue is not so much that changing a community will change its moral judgments (which all relativist theories will do, and isn't necessarily a problem - but importing new voters is also a known failure mode for democratic communities), but that in most conflicts, utilitarianism will automatically side with the majority. There seem to be commonly accepted "virtues" like justice (regardless of where you get them from) that inherently favor the underdog in *some* cases at least - "violent terrorist X wasn't evil because they opposed a tyranny", "minority Y shouldn't be suppressed because they have inherent value", "once mainstream behavior Z was wrong all along because of these clever reasons", etc. - and so you typically have to either gerrymander your minority into a majority (i.e. that thing every political cause ever does) or handwave how the issues is so much more serious for the minority that it outweighs the majority anyway (i.e. "how to be or at least appear like a utility monster").