
They certainly do, and also offer the tooling to the public: https://docs.anthropic.com/en/docs/build-with-claude/prompt-...

They also recommend using it to iterate on your own prompts, for example when working with Claude Code.



By "rigorous" I mean peeking behind the curtain and actually quantifying the interactions between different system prompts and the model weights.
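One way to make that concrete without opening the black box: treat the model as an opaque function and measure how its outputs shift across system prompts on a fixed eval set. A minimal sketch — the `call_model` stub below is hypothetical (a deterministic stand-in, not a real API client), so the harness can run offline:

```python
def call_model(system_prompt: str, user_prompt: str) -> str:
    # Hypothetical stand-in for a real LLM API call. Deterministic stub
    # so the harness itself can be exercised without network access.
    if "step by step" in system_prompt:
        return "yes" if len(user_prompt) % 2 == 0 else "no"
    return "yes"

def agreement_rate(sys_a: str, sys_b: str, eval_set: list[str]) -> float:
    """Fraction of eval prompts on which two system prompts yield the
    same output -- a crude, black-box measure of how much the system
    prompt is actually steering the model."""
    same = sum(call_model(sys_a, p) == call_model(sys_b, p) for p in eval_set)
    return same / len(eval_set)

prompts = ["Is 2+2=4?", "Is the sky green?", "Does water boil at 100C?"]
rate = agreement_rate("Answer briefly.", "Think step by step.", prompts)
```

Swap the stub for a real client and a larger eval set, and you get a cheap empirical signal of prompt sensitivity even when the weights themselves stay opaque.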

"Chain of thought" and "reasoning" are marketing bullshit.


How would you quantify it? The LM is still a black box; we don't know what most of those weights actually do.


Yes, that was my question.



