Hacker News | iamwil's comments

Blow originally did Order of Sinking Star as a quick side project. He thought that by using these pre-existing games as a starting point, he'd get it done quicker. But then he decided to experiment with the combinatorics of these mechanics, and the game blew up so much in scope that the original starting point didn't help at all.

It's a puzzle maker's puzzle game. The reason it's so lauded is that the design is so tight. Kinda like how there are certain buildings the public thinks are ugly, but architects all like them because they tickle that part of the architect brain. It's a game that gives you that ah-ha moment. Kinda like that moment where you walk from the forest into a clearing, but for your brain.

The game is hard. I only kinda got the hang of it, and I didn't quite get to that ah-ha moment. You have to be willing to sit with it and think. I think with sokoban games, you can often just about random-walk your way to a solution, because the state space and its transitions are easy enough to wander through. But I didn't find that to be the case with SSR. You have to be able to reason about the state space changes, I think because the state space isn't exactly Euclidean, so it's harder to wander into the solution.
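To make the "random walk your way to a solution" point concrete, here's a minimal sketch of blind wandering through a puzzle's state space. The puzzle here is a trivial token-on-a-line stand-in of my own invention, not an actual sokoban level: if legal moves connect states densely enough, undirected wandering eventually stumbles onto the goal.

```python
import random

def random_walk_solve(start, goal, moves, max_steps=10_000, seed=0):
    """Apply random legal moves until we hit the goal (or give up)."""
    rng = random.Random(seed)
    state, path = start, []
    for _ in range(max_steps):
        if state == goal:
            return path
        move = rng.choice(moves)
        state = move(state)
        path.append(move.__name__)
    return None  # gave up: the walk never wandered into the goal

# Toy "puzzle": push a token along positions 0..9, walls at the ends.
def left(x):
    return max(0, x - 1)

def right(x):
    return min(9, x + 1)

path = random_walk_solve(start=0, goal=9, moves=[left, right])
print(path is not None)  # the blind walk finds the goal
```

The contrast with SSR, as I read the comment, is that when the state space is harder to picture, the walk stops being a substitute for actually reasoning about where each move lands you.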



Sokoban is a common word among puzzle game fans and devs. That article wasn't written for people who didn't like those kinds of puzzles in the first place.

This is great. You might be interested in Matt Keeter's work on implicit surfaces, and his use of interval arithmetic to optimize their evaluation:

https://youtu.be/UxGxsGnbyJ4?si=Oo6Lmc4ACaSr5Dk6&t=1006
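For anyone unfamiliar with the trick the talk covers, here's a minimal sketch of interval arithmetic applied to an implicit surface. The function (a unit circle) and the boxes are my own illustrative choices, not Keeter's code: by evaluating f over a whole region at once, you get bounds that let you cull regions that provably contain no surface, without sampling any points.

```python
def interval_circle(x_lo, x_hi, y_lo, y_hi):
    """Bound f(x, y) = x^2 + y^2 - 1 over a box, returning an
    interval [lo, hi] containing every value f takes on the region."""
    def sq_interval(lo, hi):
        # Square of an interval: if it straddles zero, the minimum is 0.
        candidates = (lo * lo, hi * hi)
        lo_sq = 0.0 if lo <= 0.0 <= hi else min(candidates)
        return lo_sq, max(candidates)

    x2_lo, x2_hi = sq_interval(x_lo, x_hi)
    y2_lo, y2_hi = sq_interval(y_lo, y_hi)
    return x2_lo + y2_lo - 1.0, x2_hi + y2_hi - 1.0

# A box entirely outside the unit circle: the interval excludes 0,
# so the whole region is provably empty and can be skipped.
lo, hi = interval_circle(2.0, 3.0, 2.0, 3.0)
print(lo > 0)  # True: cull this box

# A box that straddles the boundary: 0 lies inside the interval,
# so this box would be subdivided and recursed into.
lo, hi = interval_circle(0.5, 1.5, 0.5, 1.5)
print(lo <= 0 <= hi)  # True: needs subdivision
```

The recursive subdivide-and-cull loop on top of this is what makes implicit surface evaluation fast in practice; the talk linked above goes into the real version.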


It's easy to talk yourself out of doing things when you know a little too much. Sometimes, it's good to get back into the mode where you knew nothing and do things for their own sake, just to get the engine started again.


Do you think LLMs make this process of starting the engine easier or harder? They make getting started much easier, but it might be harder to feel a sense of momentum since our expectations of speed have changed, and the learning moments have changed as well.


The bug is in the software in our heads, if anything. We've learned a little too much, so we're thinking further ahead than we would have when we first started out. So you need to purposefully shut off that part of your eval, so that you get started on anything at all.

If you design with an LLM, you can make this easier by prompting it to help you not talk yourself out of things.

I found gstack's /office_hours to be encouraging while staying firm. I've only done one of the modes, but it didn't dismiss my pushback when it was just based on my intuition. It took it as a baseline and tried to evaluate it seriously. If that's any indication, the other modes for side projects should be just as supportive.

I think LLMs can make it easier to be more ambitious. Non-techies are blown away by being able to build web pages? I'm blown away that I was able to root my 1st gen Kindle Fire to repurpose it as a remote terminal to ssh into my laptop to talk to claude code. I've been trying to root the thing for years and could never find the right instructions to make it work.


The quip I keep going back to is: "All joy, no fun."


I didn't find llms.txt useless at all. I was able to download all the library docs, check them into my repo, and point my coding agent at them all the time.


I kept thinking that he'd eventually compare it to writing software by hand, and how we're at the end of one golden age. But he never did. So I wonder what the impetus for the essay was.


About a quarter in, I figured that if there was a point to the article, he should have gotten to it already; if it was taking this long, maybe he just wanted to write about watches. So I skimmed, and I was mostly right. The way it's presented, I think you can draw comparisons to a few other things, not just writing software, but it's more of an optional exercise for the reader.


Ditto. I kept waiting for the AI comparison. My interpretation was less about agentic coding than about the commodification of LLMs, forcing Anthropic and OpenAI into a pivot to focus on brand. Anthropic's spat with the DoD could be viewed through that lens: losing money on a deal to better position the brand.


There was even a post on HN yesterday making a similar comparison of LLMs and the impact of quartz in watchmaking.

I think the comparison is warranted, and human-crafted code will perhaps become a brand differentiator in the future too.


Isn't that what happens when they post their projects on HN?

