
So it's hard to give a number on how robust it is, but yes, simply telling GPT "this doesn't work" or "try again" does work a lot of the time. We're trying to give GPT as much context as possible to help it figure out where things went wrong. Currently that means giving it type errors, execution errors, self-critique and user feedback. But we're working on expanding that; for instance, we want to experiment with automatically generating tests that validate that things work as they should. We also want to add a GPT-based "debugger" that can help you pinpoint exactly what went wrong.
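To make the idea concrete, here's a minimal sketch of that kind of retry-with-feedback loop. This is not Braindump's actual implementation; `ask_model`, `typecheck`, and `execute` are hypothetical stand-ins for an LLM call, a type checker, and a sandboxed runner:

    from typing import Optional

    def ask_model(prompt: str) -> str:
        """Placeholder for whatever LLM completion API is in use."""
        ...

    def typecheck(code: str) -> Optional[str]:
        """Return a type-error message, or None if the code type-checks."""
        ...

    def execute(code: str) -> Optional[str]:
        """Run the code in a sandbox; return an error message, or None on success."""
        ...

    def generate_with_feedback(task: str, max_attempts: int = 3) -> str:
        prompt = task
        for _ in range(max_attempts):
            code = ask_model(prompt)
            # Gather whatever signal is available: type errors first, then runtime errors.
            error = typecheck(code) or execute(code)
            if error is None:
                return code
            # Feed the failure back into the prompt so the next attempt can self-correct,
            # the automated analogue of a user saying "this doesn't work, try again".
            prompt = (f"{task}\n\nPrevious attempt:\n{code}\n"
                      f"It failed with:\n{error}\nPlease fix it and try again.")
        raise RuntimeError("no working code within the attempt budget")

The auto-generated tests mentioned above would slot in as one more error source alongside `typecheck` and `execute`.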


Sounds like a nightmare tbh. Half the fun is the actual coding (aka the journey). I guess if you want to be a prompt engineer, that sounds like fun, but it sucks all the art out of programming.


(I'm the in-house AI engineer at Braindump.)

Yeah, this is definitely something we're considering. The majority of us are programmers and have similar feelings about the joy of programming. The two things I'd say we've discovered along the way are:

1. People who aren't programmers can build games with this, and that's worthwhile in itself (giving this to a non-programmer and watching them realise an idea they have is delightful!)

2. Working at the level of concepts, not code, allows for much faster iteration and experimentation. I'm a fast programmer, but nothing compares to clicking somewhere, saying "do this", and being able to play around with it within a minute.

We're still navigating the boundary between traditional game making (including programming) and AI-infused game making; I hope we'll find a balance that keeps everyone empowered and happy :)



