
Sadly, the fast startup time is overshadowed by the slow response times of the Codex agent.

I don't get it. Even in ChatGPT I always use the Pro model with a maxed-out thinking budget (the selectors are available only on the web and Windows clients).

Let them cook as much as they can.


It looks like there are two main approaches to AI-first development: (i) favor slow responses to produce a high-quality result upfront, or (ii) favor quick responses to enable faster response-test-query iteration. Based on the comments here, Codex doesn't seem well suited to the latter. Ideally, a developer should be able to switch between the two approaches depending on the problem at hand, along the lines of the sketch below.
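
A minimal sketch of what that switch could look like, assuming the OpenAI Python SDK and its Responses API; the model name, effort values, and the ask() helper are my own placeholders, not anything Codex itself exposes:

    # Sketch only: choose reasoning effort per task instead of one global setting.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def ask(prompt: str, deep: bool) -> str:
        """Route a prompt to a slow/high-quality or fast/iterative configuration."""
        response = client.responses.create(
            model="gpt-5",  # placeholder; use whatever model your plan exposes
            reasoning={"effort": "high" if deep else "low"},
            input=prompt,
        )
        return response.output_text

    # Upfront design question: let it think.
    print(ask("Propose a schema for the billing service.", deep=True))
    # Tight edit-test loop: favor latency.
    print(ask("Fix the off-by-one in this loop: ...", deep=False))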
