Hacker News

That's interesting. Do you have any idea why the mental load jumped? Would a static analysis tool working with your type hints help?

Or are there many things to consider at once in the system, instead of many individual things?



The biggest issue I keep running into is just the concept of data "shape". I love that Clojure gives you so much freedom, but it can be quite the footgun in a large system: you see that a function expects a map with keys :foo, :bar, and :baz, but what are the values for those keys? Spec helps a little here for primitive values, but for complex nested structures (e.g. {:a [{:b [1 2]} {:c "bar"}]}) it doesn't do much. So as data moves through the system, and as the system grows, it has become increasingly difficult to track the mutations to the underlying structures.
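To be fair, spec can describe nested shapes with s/keys and s/coll-of; it just gets verbose quickly. A minimal sketch for the example map above (all spec names here are made up for illustration):

```clojure
(require '[clojure.spec.alpha :as s])

;; Hypothetical specs for a shape like {:a [{:b [1 2]} {:c "bar"}]}
(s/def ::b (s/coll-of int?))
(s/def ::c string?)
(s/def ::entry (s/or :with-b (s/keys :req-un [::b])
                     :with-c (s/keys :req-un [::c])))
(s/def ::a (s/coll-of ::entry))
(s/def ::payload (s/keys :req-un [::a]))

(s/valid? ::payload {:a [{:b [1 2]} {:c "bar"}]})
;; => true
```

The verbosity is the cost: every level of nesting needs its own named spec, which is exactly the bookkeeping that gets painful as the system grows.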

I do think that a static analysis tool would be of some help. Some sort of tooling to better handle tree structures would be very handy. I often find myself being off-by-one level with get-in calls on tree data (e.g. (get-in m [:a :b :d]) where m is {:a {:b {:c {:d 1}}}}), which is annoying because the NPE gets thrown three function calls up the stack.
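The reason it surfaces so far away is that get-in returns nil silently on a bad path rather than failing at the call site. A sketch of the failure mode, plus one common mitigation using get-in's default argument:

```clojure
(def m {:a {:b {:c {:d 1}}}})

;; Correct path, four levels deep:
(get-in m [:a :b :c :d])
;; => 1

;; Off-by-one path: no error here, just nil that blows up later
(get-in m [:a :b :d])
;; => nil

;; Mitigation: pass a sentinel default and assert on it at the call site
(let [v (get-in m [:a :b :c :d] ::missing)]
  (assert (not= v ::missing) "path not found in m")
  v)
;; => 1
```

With the sentinel, a wrong path fails at the lookup itself instead of as an NPE three frames up.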


I'm late to the thread, but I've felt the same pain and wrote a library to help deal with the cognitive load of working with "shapes" of data [1].

[1] https://github.com/escherize/tracks


Totally not saying "you're holding it wrong", but maybe once you're more than a couple levels deep into a nested map it's time to look at an in-memory db like [Datascript](https://github.com/tonsky/datascript)? Actual Datalog queries can replace get-in vectors that grow out of hand, and you also get safer mutations with transactions.
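For anyone who hasn't seen it, the idea is to flatten the tree into datoms and query by attribute instead of by path. A rough sketch (the schema and attribute names here are invented for illustration):

```clojure
(require '[datascript.core :as d])

;; Hypothetical schema: :node/children is a multi-valued ref attribute
(def schema {:node/children {:db/valueType   :db.type/ref
                             :db/cardinality :db.cardinality/many}})

(def db
  (d/db-with (d/empty-db schema)
             [{:db/id -1
               :node/name "root"
               :node/children [{:db/id -2
                                :node/name  "child"
                                :node/value 1}]}]))

;; Instead of a brittle (get-in m [:root :children 0 :value]),
;; ask for the value by attributes, regardless of nesting depth:
(d/q '[:find ?v .
       :where
       [?root  :node/name "root"]
       [?root  :node/children ?child]
       [?child :node/value ?v]]
     db)
```

The query doesn't care how deep the data sits, so the off-by-one-level class of bug mostly disappears; the trade-off is you have to commit to a schema up front.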




