Unfortunately I'm already seeing a lot of this at the consulting gigs I do. When I review in-house code that's clearly written using an LLM and ask the developer behind it questions, they can't even rationalize basic ideas about what's going on. LLM goes brr, it looks about right if you don't test any edge cases, and into the commit it goes. I suspect this phenomenon is far more widespread than currently believed. Personally I don't worry about developers' job security prospects: being able to grok why a complex piece of software doesn't work has become more valuable in this context, not less.
> Personally I don't worry about developers' job security prospects: being able to grok why a complex piece of software doesn't work has become more valuable in this context, not less.
Any programmer worth their salt could have said, and did say, that these tools are incomplete and insufficient. Unfortunately, the people with decision-making power over those programmers' "job security" are unable to see how deep technical knowledge, right down to knowing what a transistor is, will assuage the shareholders' insatiable demand for "growth at all costs!".
Developers' job security should be unchanged, but this snake oil is almost tailor-made to strip the tech-ignorant of all rational faculties.
If and when they do start asking you back: demand ownership of your work, and triple the salary.