
I was asking on Mastodon whether people have tried leveraging very concise, high-level languages like Haskell or Prolog with 2025 LLMs. I'm really, really curious.


The problem there might be limited training data?


Jane Street had a cool video about how you can address the lack of training data for a programming language using LLM patching. The video is called "Arjun Guha: How Language Models Model Programming Languages & How Programmers Model Language Models".

The big takeaway is that you can "patch" LLMs and steer them toward correct answers in less-trained programming languages, allowing for superior performance. Might work here. No clue how to implement it, but stuff like llm-to-doc makes me hopeful.
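For what it's worth, the core trick in that line of work (activation steering) can be sketched in a few lines. This is a toy illustration with plain Python lists standing in for real transformer activations; the function names and numbers are made up for the example, not from any actual library or from the talk's codebase:

```python
# Toy sketch of activation steering / "patching": derive a steering vector
# from contrastive examples, then add it to a hidden state at inference
# time to nudge the model toward the desired behavior.

def mean(vectors):
    """Component-wise mean of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def steering_vector(positive, negative):
    """Difference of mean activations on desired vs. undesired outputs."""
    mp, mn = mean(positive), mean(negative)
    return [p - q for p, q in zip(mp, mn)]

def steer(hidden, vec, alpha=1.0):
    """Patch a hidden state by nudging it along the steering direction."""
    return [h + alpha * v for h, v in zip(hidden, vec)]

# Pretend these were recorded while the model emitted idiomatic vs.
# broken code in a low-resource language (toy 2-D activations):
good = [[1.0, 0.0], [0.8, 0.2]]
bad  = [[0.0, 1.0], [0.2, 0.8]]

vec = steering_vector(good, bad)      # points from "broken" toward "idiomatic"
patched = steer([0.5, 0.5], vec, alpha=0.5)
print(patched)                        # nudged toward the "good" cluster
```

In a real setting the vectors would be residual-stream activations at a chosen layer, and `alpha` would be tuned so the patch steers style without destroying fluency.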


So you're saying we should be vibe coding more open source stuff in languages for discerning programmers ;)



