
There's no reason the app itself couldn't string together those composable steps into an action performed when the user invokes it. OP's point is that neither an LLM nor a voice layer is really required, unless you're deliberately aiming to frustrate the user by adding extra steps (chat, phone call). Customer intent can be determined with good UX.
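To make the parent's point concrete: a minimal sketch of an app chaining discrete steps behind a single user-invoked action, with no chat or voice layer in between. All function names here are hypothetical illustrations, not any real API.

```python
# Hypothetical sketch: the app composes the same discrete steps an
# LLM agent would call, but behind a single one-click action.

def find_order(order_id):
    # step 1: look up the order (stubbed for illustration)
    return {"id": order_id, "status": "shipped"}

def issue_refund(order):
    # step 2: refund it
    return {"order": order["id"], "refunded": True}

def notify_user(result):
    # step 3: confirm to the user
    return f"Refund for order {result['order']} confirmed"

def one_click_refund(order_id):
    """One UI action chaining the composable steps -- no chat, no call."""
    return notify_user(issue_refund(find_order(order_id)))

print(one_click_refund("A123"))
```

The composition is the whole trick: the user expresses intent with one tap, and the app sequences the steps deterministically.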


It's the opposite: the majority of users prefer to get support via chat or phone.

Navigating UX is still difficult in 2026.

The average HN user is leagues ahead of the average customer, or even SMB, in tech and UX knowledge; it's just not realistic to expect them to redesign their APIs.



