
mfix is somewhat low-level. It’s what Haskell’s RecursiveDo notation (“mdo”, and the “rec” keyword) desugars to.

I find it useful because it lets me take a recursive structure[1] and a multi-pass algorithm[2] on that structure, which involves side effects[3], and express it in code as a single pass, without mutation.

This conversion of multi-pass algorithms to a single pass while retaining time/space complexity guarantees is one of the key benefits of laziness in general.

[1]: e.g., an AST

[2]: e.g., type inference, where pass 1 is generating type constraints, and pass 2 is solving the constraints and annotating the tree with the final inferred types

[3]: e.g., generating fresh type variables
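To make the general laziness trick concrete (this is the pure version of the idea, no mfix needed yet): the classic “repMin” program replaces every leaf of a tree with the tree’s global minimum. Done naively that’s two passes; with a lazy knot it’s one, because each leaf can refer to the final answer before it’s been computed.

```haskell
-- repMin: replace every leaf with the tree's global minimum, in ONE pass.
-- Laziness lets each leaf refer to `m`, the final minimum, while `m` is
-- still being computed from the same traversal.
data Tree = Leaf Int | Node Tree Tree
  deriving (Show, Eq)

repMin :: Tree -> Tree
repMin t = t'
  where
    -- The knot: `m` is both an output of `go` and an input to it.
    (m, t') = go t
    go (Leaf n)   = (n, Leaf m)                 -- annotate with the FINAL min
    go (Node l r) = (min ml mr, Node l' r')
      where
        (ml, l') = go l
        (mr, r') = go r
```

The “multi-pass → single pass” shape here is exactly what the parent describes: pass 1 would collect information (the minimum), pass 2 would rewrite the tree with it; laziness fuses them.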



I'd love to hear more detail about the example you give of type inference over an AST. I like the idea, but I'm having trouble envisioning how to write code in that style cleanly. Did you implement that in an open source project I could take a look at?


It’s not the prettiest code, but here:

https://github.com/evincarofautumn/kitten/blob/8a7e949f1af71...

The key line is:

    (term', t, tenvFinal) <- inferType dictionary tenvFinal' tenv term

In “inferType”, I just annotate each term with “Zonk.type_ tenvFinal type_” as I go, in the same pass as generating the type constraints. Since “tenvFinal” is the final type environment, and “zonking” substitutes all type variables with their solved types, everything gets lazily resolved when reading the annotated types later on, e.g., during analysis & codegen.
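Not the parent’s actual code, but here’s a toy sketch of the same shape (all names hypothetical, the “constraint solving” is deliberately trivial): each leaf gets a fresh variable, a constraint is recorded as we go, and the annotation reads the solved value out of the final environment, which mdo lets us name before it exists.

```haskell
{-# LANGUAGE RecursiveDo #-}
-- Toy sketch of annotating in the same pass as constraint generation,
-- using mdo (which desugars to mfix) to feed the final environment back.
-- Uses lazy State from mtl; all names here are made up for illustration.
import Control.Monad.State
import Data.Maybe (fromMaybe)

data Tree a = Leaf a | Node (Tree a) (Tree a)
  deriving (Show, Eq)

type Var = Int
type Env = [(Var, Int)]    -- "solved" bindings; stands in for tenvFinal

fresh :: State (Var, Env) Var
fresh = do
  (v, cs) <- get
  put (v + 1, cs)
  pure v

constrain :: Var -> Int -> State (Var, Env) ()
constrain v n = modify (\(c, cs) -> (c, (v, n) : cs))

-- Annotate each leaf with its "solved" value in the same pass that
-- records the constraints, by lazily reading the final environment.
annotate :: Tree Int -> State (Var, Env) (Tree (Int, Int))
annotate t = mdo
  t' <- go envFinal t
  (_, envFinal) <- get     -- the environment AFTER all constraints exist
  pure t'
  where
    go env (Leaf n) = do
      v <- fresh
      constrain v (2 * n)  -- trivial stand-in for solving: v := 2 * n
      -- Lazy lookup in the final environment, like zonking against tenvFinal:
      pure (Leaf (n, fromMaybe 0 (lookup v env)))
    go env (Node l r) = Node <$> go env l <*> go env r
```

The key is that `go` only stores thunks referencing `envFinal` in the tree; nothing forces them until after the state thread finishes, so the knot ties without looping.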


Thanks, that helped me understand better. That's really cool!



