Certain people (myself included) have a habit of reflexively attacking new arguments or ideas. Amanda Askell calls this “shark curiosity”: sharks bite things partly because their only real way of interacting with the world is their mouth, so biting is their way of finding out what something is.
Taken literally, this paints a rather bleak picture (you attack things because that’s the only way you know how to interact with them!?). But I think the point is rather that, to the shark, biting is not necessarily an aggressive action. For curious sharks, anyway!
This is intimately related to the distinction between collaborative and combative discussion. Curious sharks feel that attacking an argument is collaborative, rather than combative. This may sound counter-intuitive, but I think it’s often a good attitude.
Firstly, it requires you to be personally distanced from your ideas, so you don’t feel personally attacked if someone criticises your ideas. This is good: being personally invested in your ideas biases you towards them and makes it hard for you to abandon or modify them if they are wrong! And everyone is wrong – a lot.
This switch in orientation is key to collaborative discussion. Rather than A versus B, it’s A and B versus the problem. The idea is something we both care about (a potential solution to the problem), so we both want to know if it works, and if not, how it is broken. In this frame, someone finding the flaws in your argument is great – it means they’re getting properly engaged in the problem!
Secondly, it fosters an attitude of base scepticism towards ideas. One thing that studying philosophy taught me is that it’s incredibly easy to make convincing arguments for any and all sides of an issue. Most arguments are bad: unceasing critical thought is our main defence against this.
Don’t be an asshole
Of course, sometimes biting is aggressive. And human culture is complicated, so what signals an attack may be influenced by everything from the setting, to the audience, to the relative statuses of the participants. And plenty of people are just assholes. So sharks have to be very careful not to come across that way.
For me, the most important thing I’ve learned is just to bite gently. Qualify your criticism with uncertain language to make it feel less strong; praise ideas before criticising them; work to establish a collaborative frame of discussion; and so on.
Knowing your audience is also very important. Some people will be quite happy to get chomping, others will rarely if ever enjoy it. Be sensitive.
Finally, as Askell says, be careful not to squash new ideas. New ideas are often bad until they’ve been worked on a lot – if we attack them too much we may persuade people to abandon them prematurely. The best thing I can think of is again to be collaborative: if you can see a hole but you think it’s fixable, say so! Maybe you can even work on the patching together.
From Annihilation by Jeff VanderMeer:
“So we have nothing.”
The surveyor ignored that, said “What do we do now?” It was clear she hated asking the question.
“Eat dinner,” I said. “Take a little stroll along the perimeter to make sure the psychologist isn’t hiding in the bushes. Think about what we’re doing tomorrow.
“I’ll tell you one thing we’re not doing tomorrow. We’re not going back into the tunnel.”
She glared at me.
Sometimes you just have to call it a day. There’s nothing more you can do (or nothing you want to do), and the best thing is to wait: either for a change in the situation so you can do more, or a change in yourself so you can see what to do.
Our circadian rhythm is something of a blessing here. Sleep allows time to pass easily, and our minds to do their mysterious subconscious processing that defeats so many problems. And you have to do it anyway, so “sleep on it” is much easier advice to take than “wait 8 hours and see what happens”.
Sometimes I’ve wished I never had to sleep. I wonder how many odd, contingent benefits we would lose if we no longer slept. Perhaps we would be more patient… but I suspect the opposite.
John cares about nothing but making the most amount of money that he can. He comes to believe that the best way to do this is to start a new company, but also that companies are most likely to succeed when the founder is intrinsically motivated by the work of the company. So John immerses himself in a domain and cultivates an interest. When he finally starts his company, he does it out of a genuine obsession with the idea. He is successful and makes a lot of money.
This is a prosaic example of an agent modifying themselves to the point at which they are no longer explicitly pursuing their original goals (although the idea is to achieve them nonetheless). Before immersing himself in his field, John1 primarily pursues money. Afterwards, John2 primarily pursues his startup idea, and is willing to sacrifice other ways to make easy money in order to work on it.
Bernard Williams famously argued that consequentialism is self-effacing: as a theory it recommends that an agent not act according to it or even believe in it. But this is fine, and indeed really a sensible feature of a goal-driven system. Believing in or acting according to a moral system is a feature of the world, and as such subject to assessment as to whether it furthers our goals. If an evil demon threatens to torture the world unless we all become Kantians, then we should by golly become Kantians. So we should not generally be surprised to see such “post-consequentialist” agents around.1
Reflective vs thorough
However, most consequentialists that I know adopt this as a kind of reflective non-consequentialism. That is, in your normal day-to-day life you act as a non-consequentialist, but when you reflect on your life you are a consequentialist. This is good, since it allows for ongoing correction of your non-consequentialist-but-supposedly-good-overall behaviours via meta-level consequentialist reflection.
But consequentialist agents can certainly turn into thoroughly non-consequentialist agents who don’t even think consequentially at the meta level. This could happen deliberately, or perhaps just accidentally: if you spend enough time acting non-consequentially you may come to believe it even reflectively. In comparison to a reflective non-consequentialist, a thorough non-consequentialist is an unguided missile: they will keep executing the behavioural strategy that they initially decided on, and can’t correct course later. This is a pretty hefty cost, and risky in a rapidly changing world. We should try and keep the ability to reflectively correct ourselves according to whatever we think the true moral theory is unless there are circumstances that really penalise that.
That said, people can have several layers of reflectivity. There is a saying which I have heard (probably falsely) attributed to the Japanese:
A man has a false heart in his mouth for the world to see, another in his breast to show to his special friends and his family, and the real one, the true one, the secret one, which is never known to anyone except to himself alone, hidden only God knows where.
I think many people have different modes of assessment that they deploy in different circumstances. It feels very different to do everyday reflection on my life versus deeper or broader assessment of it. So maybe even an apparently thorough non-consequentialist may keep consequentialism in their secret heart.
When I wrote this I was thinking about consequentialism the moral theory, but the same arguments really apply to any value-maximizing goal system. Consequentialism is in many ways just a special case of “expected-value rationality” but applied to particularly moral aims. ↩
From The Precipice: Existential Risk and the Future of Humanity by Toby Ord:
Growing up, I had always been strongly pro-technology. If not for the plausibility of these unconsummated catastrophic risks, I’d remain so. But instead, I am compelled towards a much more ambivalent view. I don’t for a moment think we should cease technological progress—indeed, if some well-meaning regime locked in a permanent freeze on technology, that would probably itself be an existential catastrophe, preventing humanity from ever fulfilling its potential.
But we do need to treat technological progress with maturity. We should continue our technological developments to make sure we receive the fruits of technology. Yet we must do so very carefully, and if needed, use a significant fraction of the gains from technology to address the potential dangers, ensuring that the balance stays positive. Looking ahead and charting the potential hazards on our horizon is a key step.
This is oddly one of the things I find hardest to accept about existential risk. From a purely selfish point of view, I care much less about parting with my resources than about the idea that I might get to see less of the future.
Imagine if well-managed general AI ushers in an era of wild progress: space elevators, life extension, world peace, the whole shebang. But since we managed it well, it takes another 50 years to arrive, and our generation is all dead by that point. That hurts. There’s a part of me that wants to rush and throw the dice just so I might get to see all that.
But this is right and good. The whole point is to make sacrifices now to benefit others or the future. We should expect some of those sacrifices to hurt!1 Let us not be fair-weather altruists.
So I agree with Ord: we should slow down development of dangerous technologies where it allows us to make them safer. And where it doesn’t currently help (e.g. because of competition between unscrupulous parties) that is a problem which we should try and solve (e.g. through international cooperation on regulation).
They don’t all need to hurt: I think giving a fraction of your income is a sacrifice that many people in our society can manage without it actually hurting much at all. ↩
I read this post by my friend Lulie, and while I have no idea whether it relates to actual Alexander Technique, it was interesting.
I tried doing the expanded awareness thing for most of a day, including a fairly long walk. I do think this actually helped my posture: I noticed that if I was even peripherally aware of the whole of my body, I would do a lot of mostly-subconscious “fixing” of posture issues. This is particularly interesting since, despite not really thinking about it, I did end up doing a lot of things that I have been previously taught consciously to do. For example, rotating my hips forward to keep my spine straighter, which is something that I’ve always found somewhat unnatural, so it was surprising to find myself doing it automatically.
However, I don’t know how this will go when I’m doing more focussed activity (like programming) or in a position where it’s harder to have good posture. But it seems quite promising so I’m going to try and do it more.
I have mixed feelings about the “non-doing” idea. I do think I know what this means: often if I need to do something I really don’t want to do, I eventually manage it by “sneaking up” on myself. This feels like a semi-autonomous part of myself picking up the action and performing it, and is often associated with a feeling of letting go or suspension of my judgement about why I don’t like the thing.
Sounds good, right? But I’m not sure I actually like this way of acting. Contra the post, it doesn’t really feel harmonious, but more like doing something with my eyes closed because I don’t want to look at it, especially since it usually happens when my conscious mind is freaking out about something. Surely it would be better to do it with my eyes open!
Even worse, I feel… dangerously cool when I act like that. I think that’s how I would feel if I had to fight for my life, or something similarly drastic. I’m not sure I want to feel like that all the time, and I’ve often associated this part of myself with the “lizard brain” since it seems to be “under” the rest and very dispassionate.
All that said, I think it’s also true that we use this mode of action a lot in normal life (like fiddling with stuff), so I’m probably overly focussing on emotionally salient examples. I’m at least going to try it a bit more and see how it goes.