
I'm one of the "dislikers", although the neural network stuff is itself an amazing tool in my opinion. I like to fall back on a much simpler question (IANAL and this is not legal advice): can these code-generating things generate code without reading (training on) encumbered code?

Humans can learn syntax and basic programs and then, independent of any "similar code", produce new algorithms that solve specific problems. Sure, similar code can be searched for on the internet, but that code is attributed and will likely carry a license. If a human copies it too closely, attribution and licensing obligations come into play. The LLMs apparently just bail on attribution.

LLMs, by contrast, are trained by being fed an absurd amount of code; humans cannot learn this way because the volume of code to be read is far too great.


