Technology makes humans really good at killing other humans. Your average baseline human is already quite good at killing things, but tool use makes us truly dangerous. However, humans are not nearly as good at not being killed, and technology has done far less to improve our defensive abilities, even against individuals. This is a problem.

With no tools at all, an attacker needs to catch you and physically overpower you in order to kill you. You can defend against this by running away, or by being physically stronger.

With a sword, an attacker just needs to catch you. You can defend against this by running away, or by having a sword, shield, or armour of your own.

With a bow, an attacker can kill you from a distance. You can defend against this by running away further, or by having a shield or armour.

With a gun, the range increases, and many kinds of armour become much less effective (although bullet-proof vests aren’t bad).

So even today, defense isn’t doing great. You can wear a bullet-proof vest at all times and live in a state of hyper-alertness to threats, but for most of us the main defense is that nobody is trying to kill us right now. Even for the most paranoid, all a determined attacker has to do is strike an unprotected area when you aren’t expecting it (which is made easier by long-range weapons).

Even worse, most of the improvements in offense also make it easier to kill lots of people. One person with a gun can kill far more people, far more easily, than one person with a sword. And someone is going to try to do that.

Griefers

Increasing the destructive power of individuals seems like a questionable idea under the best of circumstances, but it’s particularly bad because it empowers the worst section of the population: griefers. Who are griefers? Griefers are the people who just want to see the world burn: the ones who shoot up schools, set off bombs, or simply have goals that involve a lot of people dying for no good reason. They’re more common than any of us would like.

Powerful griefers are already a problem - think modern terrorists - but they’re not an existential problem. For one thing, humanity has been growing at a sufficient pace that even with the enhanced destructive power available to griefers, they’re not able to make much of a dent in the species. Shooting up a school or blowing up a train station is far more effective havoc than your average medieval griefer could hope for,1 but it’s a rounding error on the general population. Nuclear weapons and bio-engineered pandemics up the ante a lot, and existential risks look horribly more likely if they can be engineered deliberately.

At the far end, if your griefer has access to von Neumann machines, then they might just decide to point a Nicoll-Dyson laser at anything resembling intelligent life (why? because they’re assholes) and - bam - there’s your Great Filter. Human extinction is just the beginning.
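For a sense of scale, here’s a back-of-envelope sketch (my own rough figures, assuming a Sun-like star and an Earth-like planet, nothing more rigorous): a Nicoll-Dyson laser is a Dyson swarm used as a phased-array emitter, letting the attacker focus an entire star’s output on a target light-years away.

```latex
% Rough figures: solar luminosity vs. the gravitational binding
% energy of an Earth-like planet (uniform-sphere approximation).
L_\odot \approx 3.8 \times 10^{26}\,\mathrm{W}, \qquad
U \approx \frac{3 G M_\oplus^2}{5 R_\oplus} \approx 2.2 \times 10^{32}\,\mathrm{J}, \qquad
t = \frac{U}{L_\odot} \approx 6 \times 10^{5}\,\mathrm{s} \approx \text{one week}
```

On these assumptions, roughly a week of one star’s total output is enough to gravitationally unbind a planet - and the first warning the target gets is the beam’s arrival.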

So we’re screwed?

It’s not clear what we can do about this. As far as I can see we have three options:

  1. Hope that defense picks up its game.
  2. Stop having griefers.
  3. Prevent griefers and offensive tech from coming into contact with each other.

Option 1 seems optimistic. Running away (or otherwise not being there) continues to be a good strategy - you can probably perturb your orbit enough to make the extermi-laser miss you - but that requires you to know the attack is coming. Offense will always have the advantage of moving first, so it’s going to get the drop on you sooner or later. Similarly, putting something else in the way continues to work pretty well, but requires preparation. Beyond that, we might be able to develop more specialised defences against particular attacks (medical nanobots might save us from the pandemic), but we still have to react in time.

It’s possible that defense will get some big boosts. Something like a classic sci-fi personal force-field might be a hard counter to projectile weapons. But I’m not terribly optimistic on this front.

As for Option 2, not having griefers would be nice, but I don’t know of any way to achieve that short of mass brain-washing or just never making any new people.2 Neither of those is a particularly pleasant option. And if offensive tech becomes sufficiently strong, you may only need a few griefers to really ruin things.

Option 3 is the “Warhammer 40k” option. In the 40k universe, mankind exists under a brutal, oppressive dictatorship that is actually kind of justified, because if you read too many books then demons come out and eat everyone on the planet. So everything is shit, but realistically, the alternatives are worse. Substitute “you start a deadly pandemic” for “demons eating everyone” and that could be us.

Blood for the Blood God! Skulls for the Skull Throne!

The only way the 40k future works out alright is if it’s run by a nigh-omnipotent benevolent dictator. The only way we’re getting one of those is Friendly AI.

This leads me to a point rather similar to the one Scott Alexander reaches in Meditations on Moloch. In that essay, it’s the forces of coordination failure (Moloch) that gradually suck the value from existence, and our only hope is to make an omnipotent godling to save us from game-theorizing ourselves to death.

My Eldritch Power (let’s call it Khorne, since it’s about blowing shit up for the hell of it) is a lot less subtle than Moloch, but it’s got the same roots: there’s a problem with humans as agents (we can’t cooperate/some of us are assholes), and the only way to stop it may be to take away our agency in that respect. Depressingly, Khorne is even more powerful than Moloch. Moloch is an internal threat, but Khorne can kill us even if we get our own house in order - as Charlie points out, you don’t even need to be in the same solar system for the griefers to get you.

In a world where everyone has godlike power, many people will do good things; some people will do pointless things; and some people will try to kill all the others. My worry is that it’s the last group that will win.

  1. The really effective griefers of the past have occupied positions of power, like kings or emperors - from there you really can kill a lot of people. However, such griefers are a lot less common, since you have to both be a griefer and achieve a position of power. I’m more interested in what happens when you raise the abilities of the average griefer in the population.

  2. An Unfriendly AI is, by this definition, a pretty awful griefer (it has goals that involve us all dying). Option 2 is a pretty good approach here: don’t make one! However, if making one becomes sufficiently easy, then we can also think of it as an offensive weapon for the discerning human griefer.