• Post-consequentialist agents

John cares about nothing but making as much money as he can. He comes to believe that the best way to do this is to start a new company, but also that companies are most likely to succeed when the founder is intrinsically motivated by the work of the company. So John immerses himself in a domain and cultivates an interest. When he finally starts his company, he does it out of a genuine obsession with the idea. He is successful and makes a lot of money.

This is a prosaic example of an agent modifying themselves to the point at which they are no longer explicitly pursuing their original goals (although the idea is to achieve them nonetheless). Before immersing himself in his field, John1 primarily pursues money. Afterwards, John2 primarily pursues his startup idea, and is willing to sacrifice other ways to make easy money in order to work on it.

Bernard Williams famously argued that consequentialism is self-effacing: as a theory it recommends that an agent not act according to it or even believe in it. But this is fine, and indeed really a sensible feature of a goal-driven system. Believing in or acting according to a moral system is a feature of the world, and as such subject to assessment as to whether it furthers our goals. If an evil demon threatens to torture the world unless we all become Kantians, then we should by golly become Kantians. So we should not generally be surprised to see such “post-consequentialist” agents around.1

    Reflective vs thorough

    However, most consequentialists that I know adopt this as a kind of reflective non-consequentialism. That is, in your normal day-to-day life you act as a non-consequentialist, but when you reflect on your life you are a consequentialist. This is good, since it allows for ongoing correction of your non-consequentialist-but-supposedly-good-overall behaviours via meta-level consequentialist reflection.

    But consequentialist agents can certainly turn into thoroughly non-consequentialist agents who don’t even think consequentially at the meta level. This could happen deliberately, or perhaps just accidentally: if you spend enough time acting non-consequentially you may come to believe it even reflectively. In comparison to a reflective non-consequentialist, a thorough non-consequentialist is an unguided missile: they will keep executing the behavioural strategy that they initially decided on, and can’t correct course later. This is a pretty hefty cost, and risky in a rapidly changing world. We should try and keep the ability to reflectively correct ourselves according to whatever we think the true moral theory is unless there are circumstances that really penalise that.

    Layered reflectivity

    That said, people can have several layers of reflectivity. There is a saying which I have heard (probably falsely) attributed to the Japanese:

    A man has a false heart in his mouth for the world to see, another in his breast to show to his special friends and his family, and the real one, the true one, the secret one, which is never known to anyone except to himself alone, hidden only God knows where.

    I think many people have different modes of assessment that they deploy in different circumstances. It feels very different to do everyday reflection on my life versus deeper or broader assessment of it. So maybe even an apparently thorough non-consequentialist may keep consequentialism in their secret heart.

    1. When I wrote this I was thinking about consequentialism the moral theory, but the same arguments really apply to any value-maximizing goal system. Consequentialism is in many ways just a special case of “expected-value rationality” but applied to particularly moral aims. 

  • Moderating technological progress: a bitter pill

    From The Precipice: Existential Risk and the Future of Humanity by Toby Ord:

    Growing up, I had always been strongly pro-technology. If not for the plausibility of these unconsummated catastrophic risks, I’d remain so. But instead, I am compelled towards a much more ambivalent view. I don’t for a moment think we should cease technological progress—indeed, if some well-meaning regime locked in a permanent freeze on technology, that would probably itself be an existential catastrophe, preventing humanity from ever fulfilling its potential.

    But we do need to treat technological progress with maturity. We should continue our technological developments to make sure we receive the fruits of technology. Yet we must do so very carefully, and if needed, use a significant fraction of the gains from technology to address the potential dangers, ensuring that the balance stays positive. Looking ahead and charting the potential hazards on our horizon is a key step.

    This is oddly one of the things I find hardest to accept about existential risk. From a purely selfish point of view, I care much less about parting with my resources than about the idea that I might get to see less of the future.

Imagine if well-managed general AI ushers in an era of wild progress: space elevators, life extension, world peace, the whole shebang. But suppose that, because we managed it well, it takes another 50 years to arrive, and our generation is all dead by that point. That hurts. There’s a part of me that wants to rush and throw the dice just so I might get to see all that.

    But this is right and good. The whole point is to make sacrifices now to benefit others or the future. We should expect some of those sacrifices to hurt!1 Let us not be fair-weather altruists.

    So I agree with Ord: we should slow down development of dangerous technologies where it allows us to make them safer. And where it doesn’t currently help (e.g. because of competition between unscrupulous parties) that is a problem which we should try and solve (e.g. through international cooperation on regulation).

    1. They don’t all need to hurt: I think giving a fraction of your income is a sacrifice that many people in our society can manage without it actually hurting much at all. 

  • Alexander Technique, maybe

I read this post by my friend Lulie, and while I have no idea whether this relates to actual Alexander Technique, it was interesting.

    Expanded awareness

    I tried doing the expanded awareness thing for most of a day, including a fairly long walk. I do think this actually helped my posture: I noticed that if I was even peripherally aware of the whole of my body, I would do a lot of mostly-subconscious “fixing” of posture issues. This is particularly interesting since, despite not really thinking about it, I did end up doing a lot of things that I have been previously taught consciously to do. For example, rotating my hips forward to keep my spine straighter, which is something that I’ve always found somewhat unnatural, so it was surprising to find myself doing it automatically.

    However, I don’t know how this will go when I’m doing more focussed activity (like programming) or in a position where it’s harder to have good posture. But it seems quite promising so I’m going to try and do it more.


    I have mixed feelings about the “non-doing” idea. I do think I know what this means: often if I need to do something I really don’t want to do, I eventually manage it by “sneaking up” on myself. This feels like a semi-autonomous part of myself picking up the action and performing it, and is often associated with a feeling of letting go or suspension of my judgement about why I don’t like the thing.

Sounds good, right? But I’m not sure I actually like this way of acting. Contra the post, it doesn’t really feel harmonious, but more like doing something with my eyes closed because I don’t want to look at it, especially since it usually happens when my conscious mind is freaking out about something. Surely it would be better to do it with my eyes open!

    Even worse, I feel… dangerously cool when I act like that. I think that’s how I would feel if I had to fight for my life, or something similarly drastic. I’m not sure I want to feel like that all the time, and I’ve often associated this part of myself with the “lizard brain” since it seems to be “under” the rest and very dispassionate.

    All that said, I think it’s also true that we use this mode of action a lot in normal life (like fiddling with stuff), so I’m probably overly focussing on emotionally salient examples. I’m at least going to try it a bit more and see how it goes.

  • Notebook posts

    I’m going to try and write a bit more, blatantly stealing the idea from David MacIver’s Notebook. The main idea is just to have a blog – or in my case, a section of my blog – dedicated to low-effort just-get-something-out writing.

    After all, quantity frequently beats quality in terms of building skill:

The ceramics teacher announced on opening day that he was dividing the class into two groups. All those on the left side of the studio, he said, would be graded solely on the quantity of work they produced, all those on the right solely on its quality. His procedure was simple: on the final day of class he would bring in his bathroom scales and weigh the work of the “quantity” group: fifty pounds of pots rated an “A”, forty pounds a “B”, and so on. Those being graded on “quality”, however, needed to produce only one pot - albeit a perfect one - to get an “A”.

    Well, came grading time and a curious fact emerged: the works of highest quality were all produced by the group being graded for quantity. It seems that while the “quantity” group was busily churning out piles of work - and learning from their mistakes - the “quality” group had sat theorizing about perfection, and in the end had little more to show for their efforts than grandiose theories and a pile of dead clay.

    Almost certainly apocryphal, but I think the maxim is correct in some cases.

    All these posts will be tagged with ‘notebook’ to distinguish them from my (non-existent) main blog output.

    I’m also maybe going to try another of David’s writing tips, which is to take random prompts from a book, but we’ll see how that goes.

  • My giving in 2020

    I haven’t been doing terribly well at my Giving What We Can pledge recently. This was largely for tactical reasons: for the last couple of years I’ve been getting paid via a limited company and the associated novel accountancy requirements made me a bit leery of giving away large chunks of money I might actually have to give to the government. Anyway, I’ve corrected that now: I gave £50k to the Effective Altruism Funds, split between the Animal Welfare and Long Term Future funds.

    A couple of things have changed since last time I gave some money, which I thought I should write down.
