
I'm curious about specific consequences of this. I tend to think the importance of code secrecy has always been exaggerated (there are specific exceptions like hedge fund strategies and malware), even more so now in this post-Claude world. Does anyone have specific things they're trying to avoid by opting out of this?


Algorithms and models for a proprietary trading system? My personal notes? The LaTeX source of my PhD thesis?

I will go kicking and screaming and fighting into this dystopian, post-privacy nightmare world that so many people seem fine with. If I have to move off of every service or technology to maintain some semblance of privacy, so be it.


Well, mostly I was thinking about code. Aside from the specific exceptions of trading algorithms (which I was trying to get at when I said hedge fund strategies), and now PhD theses (good point, at least pre-publication), I'm still having trouble understanding the threat model even if AI did train on most proprietary, private business code. Can AI training on a CRUD app's code damage a business?

And I have the same question about private notes, or even a diary. Can an AI training on a bunch of personal material damage the person who wrote it?

Do you really keep trading algorithms on GitHub?


Well, it depends on what's in those private notes and how others will query the LLMs trained on that private data. Maybe the things you write in private notes are exactly the reason private notes should remain private.




