This seems like a marketing piece, very poorly justified.

First, large-context models effectively have to index their context as it grows, or they can't access the relevant parts of it. However, that implicit indexing can't be as comprehensive as RAG's. There is also nothing that makes navigating the context from point to point easier than with RAG.

It seems they're trying to convince people of their superiority, but the argument doesn't hold up, so they're banking on less knowledgeable customers.

Indexing is essentially a sorted projection of a larger space, based on the traits and context you care about. There's no magical way for a context to be more accessible if it has no such semantic indexing, implicit or explicit. Also, RAG doesn't mean you can't embed AST and file structure as a concern. A vector is a set of dimensions, and a dimension can be literally anything at all. AI is about finding suitable meaning for each dimension and embedding instances along that dimension (and others in combination).
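To make the "a dimension can be anything" point concrete, here's a minimal sketch of embedding structural traits of a source file as vector dimensions alongside a similarity measure. The feature choices (path depth, def/class counts, log length) are hypothetical and purely illustrative, not a real RAG pipeline:

```python
import math

def embed_file(source: str, path: str) -> list[float]:
    # Each dimension is an explicitly chosen trait of the file.
    # These features are toy examples, not production signals.
    depth = float(path.count("/"))           # file-structure dimension
    n_defs = float(source.count("def "))     # rough AST-ish signal
    n_classes = float(source.count("class "))
    size = math.log1p(len(source))           # size dimension
    return [depth, n_defs, n_classes, size]

def cosine(a: list[float], b: list[float]) -> float:
    # Standard cosine similarity over the custom dimensions.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

v1 = embed_file("def f():\n    pass\n", "src/utils/helpers.py")
v2 = embed_file("def g():\n    pass\n", "src/utils/strings.py")
print(cosine(v1, v2))  # structurally similar files score high
```

In a real system the semantic dimensions would come from a learned embedding model, but nothing stops you from concatenating structural dimensions like these onto them before indexing.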


