Everyone seemed to be very excited about Sapiens, Yuval Harari’s previous book, and I do like futurism, so I thought I’d give this a shot.

Firstly, it’s well-written. Harari’s style is engaging and direct. And when I say “direct” I mean he just says what he thinks. If he has an opinion on a topic, he just states it. And when he wants to make a case for a controversial position he doesn’t hedge. Biology shows that we are just organisms following the rules of nature; therefore freedom is a myth. God doesn’t exist; religions exist primarily as social structures to facilitate cooperation. There is nothing that clearly separates us from the animals; therefore our treatment of them is unconscionable. It’s a strong dose of materialist history, and I like it.

On the other hand, this means that when I disagree with him, he often provides little in the way of justification (no doubt those who disagree with his other claims will feel the same way!). In fact, I often found his analysis simplistic to a fault, even when I happened to agree with his conclusions. I’m not sure to what degree this is a tactic to fit more into the book (which is already longish for a popular book), or whether it reflects his actual thinking, but it’s vexing.

Another annoyance is that Harari persists in calling everything a “religion”, including modern ideologies like liberalism and humanism.

Religion is created by humans, not gods, and it is defined by its social function rather than by the existence of deities. Religion is any all-encompassing story that confers superhuman legitimacy on human laws, norms and values.

To which my response is “that’s not what that word means”. Like it or not, “religion” has a bunch of existing connotations and you can’t just repurpose it. Harari seems to me to argue that all religions are ideologies, and that therefore he can call all ideologies religions. I found this persistently irritating - just substitute “ideology” for “religion” throughout and it will be less annoying.

Moreover, Harari fails to distinguish value systems from what I’m going to call (for lack of a better word right now) interpretative frames. Many ideologies let us understand the world in a certain way, often in terms of entities of their own invention. They may also make some normative claims, which might involve said entities (or might not). Harari says that socialism thinks that everybody matters, and that the best way to benefit everyone is for the workers to own the means of production. But these two things are quite separate, and I think this blinds him to the variety of ideologies there might be.

Harari is also very taken with the idea that our institutions are fictions. To him this points to the human ability to tell stories as the feature that has made us successful. And he may not be wrong about that. But an incautious reader could easily come away thinking that he is presenting a more radical view of history than he is - again, substituting the word “institution” for “myth” throughout reveals a more prosaic historical narrative.

One thing I think Harari gets very right is to focus on the degree to which upcoming technological change is likely to undermine some disturbingly central parts of how we think about ourselves. Things like: the individual; the will (reading his section about tailored advice, I was strongly reminded of the Whispering Earring); the importance (nay, relevance) of most humans to the economy. I think he’s hit the nail on the head about the way even our most staunchly materialist intellectuals champion the biological, deterministic model of humanity, but then turn around and say “don’t worry, liberal humanism will be fine somehow, hooray!” How are we going to cope with being able to choose what to want? I don’t think anyone has any idea.

This strikes me as an instance of what I think is the scariest philosophical problem ever: what if we have an ontological crisis about something that is at the root of our value system? Like, say, people? For example, in a world with uploaded humans sharing computational resources, memory space, and even past histories, how many people are there? What happens when our person-detector starts coming up “???”? And that’s before we even start worrying about different kinds of experience.

Harari also points out (but doesn’t discuss as much as I’d have liked) the argument that liberalism and humanism have won in the past century because they were just better at running large, functional countries. But that’s a trend that seems unlikely to continue indefinitely, and so the question is: will we inevitably bow to whatever is most efficient? Or will we cling to our current approach?

I don’t think Harari has much to offer in terms of actually predicting the future. His survey of possible future “techno-religions” is especially weak, including only transhumanism (acknowledging that it faces many of the same issues as humanism), and “data religion”, which combines some reasonable interpretative claims about how data-processing has developed with some ludicrous normative claims that “data-processing is what matters”. I don’t think anyone actually believes that, and I certainly don’t think it’s a plausible future ideology. I think Harari got stung by his conflation of value systems and interpretative frames here - there is certainly an interpretative frame in which data is paramount, but I really can’t see it taking on a normative role.

Ultimately I just don’t think he’s thinking weird enough. Barring AI making things really unpredictable, how about something like an ascended economy as a consequence of the elimination of humans from the economy? And even that’s probably not weird enough.

Anyway, that’s my summary of Homo Deus: looking in the right direction, but a bit wrong and not weird enough. Probably not worth reading if you’ve already read a modest amount of futurism, but pretty out there for a popular book.