
I said there is hype in general about the topic.

> I don’t see how this is true. By what metric might “most” lead to a true statement?

It's English, not code.



“Most” in English means more than half. And there is an underlying meaning, stated or implied, by which we can figure out whether we are in the “most” category. So, to restate: by what measure is more than half of the article the result of overly high expectations? Please show us some detail; don’t just tell us that it is that way.

The best I can puzzle together is that you are suggesting “there is AI hype” and (hand waving) this undermines the article (somehow). I don’t see it. You don’t explain specific weaknesses in the article itself other than that it is less detailed than you would like. You said you demand depth, so it would seem fair to respond with at least as much detail as the author provided.

We are at the point where saying “AI is overhyped” is very overloaded, making its information value low or even negative. Instead, I suggest being specific. For example, if you think OpenAI has made [specific] investments or product choices that are unlikely to materialize over a quarter or a year, say that and explain why. My hope here is to learn and to reduce the chances of talking past one another.

Even if I were to use the vague language of saying “we are in a hype bubble” (which I don’t, because I think it is neither useful nor clear), most examples of hype are associated with important and substantive changes or lessons to learn.

So when an author writes up their lessons learned about making agents work better, I find it a bit low-effort to see someone else dismiss that article without any detail at all.

To move this forward, what do you think the author got wrong about agentic AI, specifically?



