• I work in software. A perennial bugbear of software project management is: why do so many software projects go over time? Moreover, why do they do this when so much time is spent trying to break down projects and get engineers to estimate how long the pieces will take?

The answer is simple: things take longer than we expect. And we know that we’re uncertain about our estimates when we make them, i.e. there’s actually some probability distribution over the time the task will take. So why are we surprised that things blow out and why don’t we have the tools to measure and deal with this?

• “Orphan” typeclass instances are instance declarations (instance T A) that occur in any module other than:

1. the module where the class T is defined, or
2. the module where the type A is defined.

The orthodox Haskeller viewpoint is that orphan instances are bad and you should never write them, because they can lead to incoherence. Incoherence is where we end up using two differing instances for the same type in our program. This can manifest in two unpleasant ways:

1. If you actually import both instances, your program will fail to compile.
2. If you do not directly import both, but rather use two modules which independently use the differing instances, you can end up with incoherent behaviour.

Both of these are pretty bad problems, in that neither of them is immediately apparent when you write the offending instance, but down the line they can cause some unsuspecting user’s code to not compile, or worse, silently misbehave.

However, these failures can only happen if it is possible for an unsuspecting user to import the type and the instance separately. Moreover, in the case of compilation failure, the user must be unable to fix the source of the problem. With a little bit of care, we can use orphans quite safely, so long as we avoid the problematic cases.

• If there’s one thing compiler writers spill a lot of ink over, it’s tree traversals. Tree traversals are infuriatingly simple conceptually, but can be a lot of boilerplate to actually write. This post covers a couple of tricks that I’ve found useful recently using tools from lens.

Let’s suppose we have a simple lambda calculus with some primitive integer operations.

{-# LANGUAGE LambdaCase #-}
{-# LANGUAGE DeriveFunctor #-}
{-# LANGUAGE DeriveTraversable #-}
{-# LANGUAGE RankNTypes #-}
module LensesForTreeTraversals where
import Control.Lens
import Data.Monoid
import Data.Functor.Foldable hiding (fold)
import Data.Foldable

type Name = String

data Type = IntegerType | FunType Type Type

data Term
  = Var Name
  | Lam Name Type Term
  | App Term Term
  | Plus Term Term
  | Constant Integer


We’d like to write a simple constant-folding pass over this AST. The algorithm is extremely simple, in prose:

Recursively transform all the subterms, and if the resulting node is the sum of two constants, replace it with a constant equal to their sum.

We would really like to reach this level of clarity in our code. The subtlety, of course, is how we talk about “all the subterms”, and that will be the theme of this post.

A naive implementation of the constant folding function is as follows:

-- | Do the local part of the constant folding transformation.
cf :: Term -> Term
cf = \case
  Plus (Constant i1) (Constant i2) -> Constant (i1 + i2)
  x -> x

constantFold :: Term -> Term
-- Do the local transformation after recursively calling ourselves
-- on the subterms, if any
constantFold t = cf $ case t of
  Plus t1 t2 -> Plus (constantFold t1) (constantFold t2)
  Lam n ty t -> Lam n ty $ constantFold t
  App t1 t2 -> App (constantFold t1) (constantFold t2)
  x -> x


Tediously, we have to explicitly pull out each subterm and call the function on it recursively, which is not only boilerplate, but error-prone (nothing will tell us if we’ve missed a subterm!).

## Traversals

Now, we want to do something for every subterm, which sounds like the sort of thing we should be able to do with traverse. But it’s a pain to make Traversable work with our AST. In particular Traversable expects your type to have kind * -> *, i.e. to be able to “contain” values of any type. This isn’t really right for us: a term is like a monomorphic container of its subterms.

Enter Traversals from lens. A traversal is very closely related to Traversable.traverse:

-- traverse :: (Traversable t, Applicative f) => (a -> f b) -> t a -> f (t b)
-- type Traversal s t a b = forall f. Applicative f => (a -> f b) -> s -> f t


A Traversal is a traverse-like function, but it can be more specific if that’s appropriate. In particular, we can define a Traversal that only traverses the subterms of a term.
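As a sanity check that the types really do line up, traverse for lists is itself a Traversal, and running a traversal with the Identity applicative recovers a pure mapping function. The names listElems and over' below are illustrative local definitions (lens itself provides traversed and over):

```haskell
{-# LANGUAGE RankNTypes #-}
import Data.Functor.Identity (Identity (..))

type Traversal s t a b = forall f. Applicative f => (a -> f b) -> s -> f t

-- 'traverse' for lists already has the shape of a Traversal
listElems :: Traversal [a] [b] a b
listElems = traverse

-- A minimal version of lens's 'over': instantiate the traversal's
-- applicative at Identity to get back a pure mapping function
over' :: Traversal s t a b -> (a -> b) -> s -> t
over' l f = runIdentity . l (Identity . f)
```

So over' listElems (+1) [1,2,3] evaluates to [2,3,4], just as fmap would.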

(Maybe there’s a clever way to write traversals, but I do it the stupid way: take the effectful function and the value, and apply it to the subparts.)

-- Traversal' a b = Traversal a a b b, useful if you're not doing clever stuff
termSubterms :: Traversal' Term Term
termSubterms f = \case
  Lam n ty t -> Lam n ty <$> f t
  App t1 t2 -> App <$> f t1 <*> f t2
  Plus t1 t2 -> Plus <$> f t1 <*> f t2
  -- Terms without subterms. Note that you should *not* put 'f x' as the
  -- RHS: that would say that the term was its own subterm!
  x -> pure x


What does termSubterms do? It’s like traverse: you give it a function that does stuff to subterms, and it will give you one that does that to all the subterms of a particular term. It’s also usable with a lot of lens functions; for example, it’s a Setter, so you can write to all the targets of a Traversal.

Now we can write our constant folder more cleanly:

constantFold2 :: Term -> Term
-- 'over' applies a function to all targets of an optic; we can use it with a
-- 'Traversal' to apply the function to all of the things which it traverses
-- (the subterms)
constantFold2 t = cf $ over termSubterms constantFold2 t


I think this is pretty close to our original English specification!

We can abstract this a little bit further to factor out the local part of the transformation. Then we get a nice little function transformOf, which is already defined for us in Control.Lens.Plated.

-- The real version is a bit more general than this
-- transformOf :: Traversal' a a -> (a -> a) -> (a -> a)
-- transformOf l f = go where go = f . over l go

constantFold3 :: Term -> Term
constantFold3 = transformOf termSubterms cf
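
To see the whole pipeline run end to end, here is a self-contained sketch that replaces the lens dependency with minimal local definitions of Traversal', over, and transformOf (the real library versions are more general, but behave the same way here):

```haskell
{-# LANGUAGE LambdaCase #-}
{-# LANGUAGE RankNTypes #-}
import Data.Functor.Identity (Identity (..))

type Name = String

data Type = IntegerType deriving (Eq, Show)

data Term
  = Var Name
  | Lam Name Type Term
  | App Term Term
  | Plus Term Term
  | Constant Integer
  deriving (Eq, Show)

-- A lens-style Traversal specialised to one source and target type
type Traversal' s a = forall f. Applicative f => (a -> f a) -> s -> f s

termSubterms :: Traversal' Term Term
termSubterms f = \case
  Lam n ty t -> Lam n ty <$> f t
  App t1 t2 -> App <$> f t1 <*> f t2
  Plus t1 t2 -> Plus <$> f t1 <*> f t2
  x -> pure x

-- Minimal stand-ins for lens's 'over' and 'transformOf'
over :: Traversal' s a -> (a -> a) -> s -> s
over l f = runIdentity . l (Identity . f)

transformOf :: Traversal' a a -> (a -> a) -> a -> a
transformOf l f = go where go = f . over l go

cf :: Term -> Term
cf = \case
  Plus (Constant i1) (Constant i2) -> Constant (i1 + i2)
  x -> x

constantFold3 :: Term -> Term
constantFold3 = transformOf termSubterms cf
```

Folding Plus (Plus (Constant 1) (Constant 2)) (Constant 3) with this yields Constant 6: the inner sum collapses first, then the outer.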


## Folds and mixing Traversals

As a bonus, we can also do folds. For example, let’s count the number of term nodes in a term.

countTerms :: Term -> Sum Integer
countTerms t =
  -- The number of terms is 1...
  Sum 1
  -- ... plus the number of terms in all the subterms
  <> foldMapOf termSubterms countTerms t


Note that we don’t need to do any case analysis since the algorithm happens to be completely generic over the different kinds of term.

This problem also gives us the opportunity to show off another very useful feature of Traversals: we can mix and match various different Traversals of the same type. In particular, we haven’t counted the types within our terms: if we care about the number of AST nodes then we should probably count them too!

We need a couple more traversals: one to get the types within a term, and one to get the types within a type.

termSubtypes :: Traversal' Term Type
termSubtypes f = \case
  Lam n ty t -> Lam n <$> f ty <*> pure t
  x -> pure x

typeSubtypes :: Traversal' Type Type
typeSubtypes f = \case
  FunType ty1 ty2 -> FunType <$> f ty1 <*> f ty2
  x -> pure x


Now we can say how to count the nodes in a type, and how to count the nodes in a term. Again, this is completely generic and doesn’t need to do any case analysis.

countTypeNodes :: Type -> Sum Integer
countTypeNodes t =
  -- The number of nodes is 1...
  Sum 1
  -- ... plus the number of nodes in all the subtypes
  <> foldMapOf typeSubtypes countTypeNodes t

countTermNodes :: Term -> Sum Integer
countTermNodes t =
  -- The number of nodes is 1...
  Sum 1
  -- ... plus the number of nodes in all the subterms
  <> foldMapOf termSubterms countTermNodes t
  -- ... plus the number of nodes in all the subtypes
  <> foldMapOf termSubtypes countTypeNodes t
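
There is no magic in foldMapOf, by the way: instantiating a traversal's applicative at Const m for a Monoid m discards the rebuilt structure and just accumulates the monoid. A minimal sketch (foldMapOf' and bothInts are illustrative names; lens's real foldMapOf works over any Fold):

```haskell
{-# LANGUAGE RankNTypes #-}
import Data.Functor.Const (Const (..))
import Data.Monoid (Sum (..))

type Traversal' s a = forall f. Applicative f => (a -> f a) -> s -> f s

-- Running a traversal with the 'Const m' applicative never rebuilds the
-- structure; it just mappends the results of 'f' at every target
foldMapOf' :: Monoid m => Traversal' s a -> (a -> m) -> s -> m
foldMapOf' l f = getConst . l (Const . f)

-- A toy traversal of both components of a homogeneous pair
bothInts :: Traversal' (Int, Int) Int
bothInts f (a, b) = (,) <$> f a <*> f b
```

For example, getSum (foldMapOf' bothInts Sum (3, 4)) is 7.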


There are a bunch of other nice tools in lens, although as ever, finding them depends on your willingness to wade through the library.

## Addendum: why not recursion-schemes?

You can achieve some of these goals with recursion-schemes. The style there is to define your type as a fixpoint of a “one-level” functor.

data Term2F a
  = Var2F Name
  | Lam2F Name Type a
  | App2F a a
  | Plus2F a a
  | Constant2F Integer
  deriving (Functor, Foldable, Traversable)

type Term2 = Fix Term2F


Now, recursion schemes is all about folds, so we can go ahead and “fold” (cata) our term into another term.

cf' :: Term2F Term2 -> Term2F Term2
cf' = \case
  Plus2F (Fix (Constant2F i1)) (Fix (Constant2F i2)) -> Constant2F (i1 + i2)
  x -> x

constantFold4 :: Term2 -> Term2
constantFold4 = cata (embed . cf')
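
If you don’t want the recursion-schemes dependency, the handful of pieces used here (Fix, cata, embed) are easy to define by hand. A minimal, self-contained sketch (the library versions are considerably more general):

```haskell
{-# LANGUAGE DeriveFunctor #-}
{-# LANGUAGE LambdaCase #-}

type Name = String
data Type = IntegerType

-- Tie the recursive knot on a one-level functor
newtype Fix f = Fix (f (Fix f))

-- A catamorphism: fold the structure bottom-up by first folding the
-- children, then applying the algebra at this node
cata :: Functor f => (f a -> a) -> Fix f -> a
cata alg (Fix x) = alg (fmap (cata alg) x)

embed :: f (Fix f) -> Fix f
embed = Fix

data Term2F a
  = Var2F Name
  | Lam2F Name Type a
  | App2F a a
  | Plus2F a a
  | Constant2F Integer
  deriving Functor

type Term2 = Fix Term2F

cf' :: Term2F Term2 -> Term2F Term2
cf' = \case
  Plus2F (Fix (Constant2F i1)) (Fix (Constant2F i2)) -> Constant2F (i1 + i2)
  x -> x

constantFold4 :: Term2 -> Term2
constantFold4 = cata (embed . cf')
```

With these definitions, constantFold4 on Plus2F of two constants collapses them to a single Constant2F, just as before.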


Also, Term2F is of the right shape for Traversable, so we can use traverse as well!

This is fine as far as it goes, but there are two major problems:

1. recursion-schemes does badly with mutually recursive types. If this is a problem for you, you’ll realise pretty quickly.
2. recursion-schemes is good at dealing with the sub-parts of the same type, but not those of different types.

For example, let’s try and write countTermNodes.

countTermNodes2 :: Term2 -> Sum Integer
countTermNodes2 = cata f where
  f = \case
    Lam2F _ ty tc -> Sum 1 <> countTypeNodes ty <> tc
    x -> Sum 1 <> fold x


The normal case is similarly concise, but there’s no way to handle the types generically. So we have to write the Lam case manually, and we would have to do the same for every term constructor containing a type, if we had others.

RESPONSES: This post by Oleg Grenrus tackles the mutually recursive traversals problem, and this comment by Chris Penner gives an alternative way of doing the recursion.

• Certain people (myself included) have a habit of reflexively attacking new arguments or ideas. Amanda Askell calls this “shark curiosity”: sharks bite things partly because their only real way of interacting with the world is their mouth, so biting is their way of finding out what something is.

Taken literally this implies a rather bleak picture (you attack things because that’s the only way you know to interact with them!?). But I think that the point is rather that to the shark, biting is not necessarily an aggressive action. For curious sharks anyway!

## Collaborative biting

This is intimately related to collaborative and combative discussions. Curious sharks feel like attacking an argument is collaborative, rather than combative. This may sound counter-intuitive, but I think it’s often a good attitude.

Firstly, it requires you to be personally distanced from your ideas, so you don’t feel personally attacked if someone criticises your ideas. This is good: being personally invested in your ideas biases you towards them and makes it hard for you to abandon or modify them if they are wrong! And everyone is wrong – a lot.

This switch in orientation is key to collaborative discussion. Rather than A versus B, it’s A and B versus the problem. The idea is something we both care about (a potential solution to the problem), so we both want to know if it works, and if not, how it is broken. In this frame, someone finding the flaws in your argument is great – it means they’re getting properly engaged in the problem!

Secondly, it fosters an attitude of base scepticism towards ideas. One thing that studying philosophy taught me is that it’s incredibly easy to make convincing arguments for any and all sides of an issue. Most arguments are bad: unceasing critical thought is our main defence against this.

## Don’t be an asshole

Of course, sometimes biting is aggressive. And human culture is complicated, so what signals an attack may be influenced by everything from the setting, to the audience, to the relative statuses of the participants. And plenty of people are just assholes. So sharks have to be very careful not to come across that way.

For me, the most important thing I’ve learned is just to bite gently. Qualify your criticism with uncertain language to make it feel less strong; praise ideas before criticising them; work to establish a collaborative frame of discussion; and so on.

Knowing your audience is also very important. Some people will be quite happy to get chomping, others will rarely if ever enjoy it. Be sensitive.

Finally, as Askell says, be careful not to squash new ideas. New ideas are often bad until they’ve been worked on a lot – if we attack them too much we may persuade people to abandon them prematurely. The best thing I can think of is again to be collaborative: if you can see a hole but you think it’s fixable, say so! Maybe you can even work on the patch together.

• From Annihilation by Jeff VanderMeer:

“So we have nothing.”

The surveyor ignored that, said “What do we do now?” It was clear she hated asking the question.

“Eat dinner,” I said. “Take a little stroll along the perimeter to make sure the psychologist isn’t hiding in the bushes. Think about what we’re doing tomorrow.”

“I’ll tell you one thing we’re not doing tomorrow. We’re not going back into the tunnel.”

“Tower.”

She glared at me.

Sometimes you just have to call it a day. There’s nothing more you can do (or nothing you want to do), and the best thing is to wait. Either for a change in the situation so you can do more, or a change in yourself so you can see what to do.

Our circadian rhythm is something of a blessing here. Sleep allows time to pass easily, and our minds to do their mysterious subconscious processing that defeats so many problems. And you have to do it, so “sleep on it” is much easier advice to take than “wait 8 hours and see what happens”.

Sometimes I’ve wished I never had to sleep. I wonder how many odd, contingent benefits we would lose if we no longer slept. Perhaps we would be more patient… but I suspect the opposite.