Hacker News
How does AI impact my job as a programmer? (chelseatroy.com)
47 points by thm on May 28, 2024 | hide | past | favorite | 39 comments


Unfortunately I'm already seeing a lot of this at the consulting gigs I do. When I review in-house code that's clearly written using an LLM and ask the developer behind it questions, they can't even rationalize basic ideas about what's going on. LLM goes brr, it looks about right if you don't test any edge cases, and into the commit it goes. I suspect this phenomenon is way more widely spread than currently believed. Personally I don't worry about developers' job security prospects, being able to grok why a complex piece of software doesn't work has become more valuable in this context, not less.


> Personally I don't worry about developers' job security prospects, being able to grok why a complex piece of software doesn't work has become more valuable in this context, not less.

Any programmer worth their salt could have said, and did say, that these tools are incomplete and insufficient, but unfortunately the ones who hold decision-making power over such programmers' "job security" are unable to see how even knowing what a transistor is would assuage the shareholders' insatiable need for "growth at all costs!".

Developers' job security should be unchanged, but this snake oil is almost tailor-made to strip the tech-ignorant of all rational faculties.

If and when they do start to ask you back: demand ownership of your work, and triple the salary.


For some programmers, creating code is a write-only activity, and LLMs only build on this attitude.


From the article:

"Our relative lack of skill at investigation becomes clear when we look at the accuracy rate of StackOverflow answers. For the amount of sass you see on that platform, you’d expect the programmers to at least be right. Except they aren’t. We have whole jokes about this too. Again, this is what was used to train the LLMs. Models trained on human data can’t outperform the base error rate in that data."

I think this is an important metric to consider. These LLMs are being trained on data sets with bad results and bad code with no real way to tell the difference. Only someone experienced in this field can read a SO article and appreciate the quality of the responses. An LLM is just summarizing bad information.


> An LLM is just summarizing bad information.

In other words, there is no real *intelligence* involved at all.

*Intelligence* has yet to be accurately defined --- and there is no reason to believe it is a statistical function.


When I was a kid — and not just as a kid, this was still the case 10 years ago — summarising text was considered "AI-complete" (also known as "AI-hard").

In fact, at time of writing, "natural language understanding" is still on the Wikipedia page for AI-complete.

> Intelligence has yet to be accurately defined --- and there is no reason to believe it is a statistical function

Other than the work of Rev. Bayes, Prof. Schrödinger. Plus, now I think about it, saying "randomness can't lead to intelligence" is the talking point for all the people who insist Darwin is wrong and there has to be an intelligent designer.


What LLMs do and what Evolution does are not the same thing.

Evolution is a process where small changes over a long time create new, more complex organisms. LLMs are not that. LLMs are not evolving themselves, giving themselves new abilities. Just adding more variables to a long list of variables is not the same thing.


saying "randomness can't lead to intelligence" is the talking point

Given enough time, a monkey randomly typing on a keyboard could hypothetically produce Shakespeare.

But talking points aside, this does not make the monkey more "intelligent" nor does it mean that "randomness" is a practical methodology for achieving specific results on a human timescale.

Particularly when the desired result is as complex and poorly understood as "intelligence".


I'm describing evolution itself as the intelligence here, not the monkey. And the million typewriters taking ages is why Creationists argue it can't work (having not understood half of it).

> But talking points aside, this does not make the monkey more "intelligent" nor does it mean that "randomness" is a practical methodology for achieving specific results on a human timescale.

Not by itself. Evolution is more than just random; it's a specific filter on top of randomness, just like the other processes in question. And computers roll the dice very quickly compared to biological reproduction, which is why neither you nor any other human can win a chess game against the best computer anymore: it got there on a human timescale.
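The "filter on top of randomness" point can be sketched with the classic weasel-program toy model (everything here is illustrative; it is not how any real model in this thread works). Random mutation alone would take astronomically long to hit a target, but adding a selection filter on top of the same randomness converges fast:

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def mutate(s, rate=0.05):
    # Pure randomness: each character may be replaced by a random one.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

def score(s):
    # The "filter": how many characters already match the target.
    return sum(a == b for a, b in zip(s, TARGET))

current = "".join(random.choice(ALPHABET) for _ in TARGET)
generations = 0
while current != TARGET:
    # Generate random variants, keep the best one (selection).
    offspring = [mutate(current) for _ in range(100)]
    best = max(offspring, key=score)
    if score(best) >= score(current):
        current = best
    generations += 1

print(generations)  # typically converges in hundreds of generations, not eons
```

The point of the sketch is that the selection step, not the randomness, does the work: the same mutation operator without the `max(..., key=score)` filter would essentially never finish.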


> An LLM is just summarizing bad information.

You cannot make this claim without evidence that the LLM primarily used poor quality SO articles. You don’t include an assessment of all of the other sources. Other sources could include properly functioning open source code, for example.


You missed important context there. In particular, "These LLMs are being trained on data sets with bad results and bad code with no real way to tell the difference."

A couple dozen bad SO articles can easily poison the results of thousands of examples of good OSS code. Code rarely has prose associated with it. SO articles have prose, so these articles will be disproportionately considered as the LLM is self-organizing.

So, it's not necessary that the LLM be trained primarily on bad SO articles for it to have a disproportionate impact on the results it generates from prose prompts.


When I train a CNN the scale of errors is a very important characteristic of the training set so I empirically don’t understand this idea of “poisoning” with “a couple of dozen bad SO articles”.

Do you have any sources related to this disproportionate impact?


"On the Dangers of Stochastic Parrots" (doi:10.1145/3442188.3445922) is a great introduction to this, even with its flaws.

I also recommend Mitchell 2023 (doi:10.1073/pnas.2215907120) and Niven 2019 (arXiv:1907.07355) as good starting points.

These don't directly address your question, but within the context of these papers, it's possible to see how the weighting of prose (which is a relation to prompts) and code can become skewed easily.


“Models trained on human data can’t outperform the base error rate in that data.”

That doesn’t seem inherently/unavoidably true. Surely a human could read a bunch of SO answers, try them, and thereby outperform the base error rate of the answers.

I don’t see it as impossible for an AI model to exist that does the same.
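A minimal sketch of that "try them" step, with made-up candidate answers standing in for scraped snippets (the names and the task are purely illustrative): running each candidate against known test cases filters out the wrong ones, so the surviving set beats the base error rate of the pool.

```python
# Hypothetical Q&A-site answers to "return the smallest element, or None if empty".
# Two of the three are wrong, mirroring a pool with a high base error rate.
candidates = {
    "answer_1": lambda xs: sorted(xs)[0] if xs else None,  # correct
    "answer_2": lambda xs: xs[0],                          # wrong: just the first element
    "answer_3": lambda xs: min(xs) if xs else 0,           # wrong empty-list behavior
}

# The verification step: known input/output cases to try each answer against.
tests = [([3, 1, 2], 1), ([5], 5), ([], None)]

def passes(fn):
    try:
        return all(fn(xs) == want for xs, want in tests)
    except Exception:
        return False  # an answer that crashes also fails verification

good = [name for name, fn in candidates.items() if passes(fn)]
print(good)  # only the verified answers survive
```

The pool is 2/3 wrong, but the filtered output contains no wrong answers, which is the sense in which a verifier (human or machine) can outperform the base error rate of its training data.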


Such a model has not yet been invented. Certainly, a generative model such as a large language model is unlikely to gain the ability to reason about these answers.

LLMs are a step on the path toward better human/computer interaction, but they are just a step. Like most new things in the AI field, they have been over-hyped and have moved the market in a giant game of tech FOMO.


> a giant game of tech FOMO.

For me, the most distressing part is how easily tech people drink the FOMO and then go to work promoting it. This happens over and over again.


Because a human can comprehend, and an LLM cannot.


> I don’t see it as impossible for an AI model to exist that does the same.

The important step is creating an AI model with enough cognition to recognize it needs to do the same before spouting nonsense.


Some malicious folks concerned about the stability of their work may decide to poison training data... not an easy task for sure, but not entirely impossible in their niche area of expertise.

Like literally all powerful tools in the past, AI abuse will come from various directions for various unexpected reasons.


>These LLMs are being trained on data sets with bad results and bad code with no real way to tell the difference.

No shit. I lost it when Google AI started recommending that people refill their blinker fluid, a meme that's been sarcastically parroted all over the internet for decades but that Google's AI took as serious gospel in its training dataset.

The fact of the matter is these are just LLMs, not AI; they're only as good as their dataset. And since the entire internet is their dataset, along with all the human trolling, sarcasm, and shitposting, and without any logic or intuition to distinguish what is sarcasm and what is not, their answers are going to be questionably accurate a lot of the time when it comes to generic info that every average joe can have an opinion on.

There's no actual intelligence there to call it an AI; it's just a summary of the entire internet, which is definitely useful for some tasks, but not for replacing human ones.

I saw some people on HN defend Google AI's shitty answers as being useful for humor rather than for real answers, but then that would be the world's most expensive joke machine for a company whose main business is search and that thrived as the go-to for web answers. You don't need to dedicate 3% of the world's energy, or whatever it is, to generating only wrong answers that sound funny.


At this point I adamantly refuse to call it AI one single more time, because it is not: it's LLMs, it's generative models, it's pretrained transformers. "AI" as a term implies, to your average person (and to many tech people, it seems), an intelligence in the product that is simply not. Fucking. There. LLMs do not understand, at all, what they are saying. They are simply generating words one at a time to minimize "error," whatever that means in the context of the model.


So let's put "AI" on the same shelf with "self driving" at least for the time being.


I call them generators. I hope that term takes off, because that's what they really are.


Except it's not even summarizing, it's generating new and creative ways to be wrong.


> If you wanted to make a program, what you did was start by writing some code from scratch. You used about 8 concepts—strings, integers, floats, variables, conditions, loops, functions, modules2—to make your thing, like baking bread from raw ingredients.

Somehow a key part, I/O, is missing from this description. For your system to work, you have to fit into the API/constraints of the system within which yours runs.

> Are large language models gonna cause programmers to lose their jobs? Not anymore than StackOverflow did, in my view. However, it’s going to change them…somewhat.

Disagree.

That will hold true until someone invents the 8 concepts with which one can create a whole app just from its conceptual description. That person would not lose their job; the 9 devs who would otherwise have had to implement that app without this approach will.

I foresee a future where 1 team with the right tools replaces today's 10 people. So some will lose their jobs, and some will secure them further.

> But like an IDE, or a framework, or a test harness, utility here requires skill on the part of the operator—and not just ChatGPT jockeying skill: programming skill. Existing subject matter expertise.

Agree. You can't assess the quality of an LLM's response if you don't at least speak the language it replies in (or understand the nets of concepts hiding behind the words).

---

Also, I did not like this passage. In my experience the book is quite useful and does not deserve mention in such a context.

> First of all, that culture has its infiltrators. We got self-styled prophets writing whole books about their personal philosophies and slapping general-purpose-sounding names on them like “Clean Code.” It reminds me of the dudes who ran around the Middle East 2,000 years ago claiming they personally could introduce you to G-d.

Looks like a personal attitude toward the author or their work.

---

Other than that it's good solid writing. I enjoyed it.


Multi-startup CTO/CPO here (including one YC company and one high-growth, growth-stage company). So AI has been interesting for me across the three businesses I am working on. For two of them, it has enabled us to do things that were not possible at high fidelity before (e.g. unstructured data entry). For the third, growth-stage place, it is allowing us to increase the net margin as we scale by reducing the number of operators we need to hire.

I see the biggest impact in more traditional startups which involve a lot of manual labour. For us, LLMs are letting us scale much faster than we could before.

For the dev teams I manage, LLMs provide a lot of shortcuts whereas before we spent extensive time hand tuning parsers and simple automations.


Of course, the industry has a long history of automating and abstracting tasks. Usually there's plenty more work to be done but it does tend to mean that a lot of low-level tasks get devalued over time and, if your value is mostly a willingness to roll up your sleeves and do a lot of low-value grunt work, well...

Needs also change. Someone somewhere is probably still hiring x86 assembly language programmers for embedded work, but those are probably not the highest-paid jobs, and you'd likely better be especially expert. And some skills and knowledge just don't matter much anymore, at least if they haven't been transferred to newer domains where they can still be applied. Probably no one wants to hire the world's leading consultant on some specific obsolete proprietary architecture.


This is not related to the article, but as a multi startup CTO/CPO, how do you explain it to yourself when one or more of your companies fail when you know that, by working in multi startups, your attention and resources were divided?


I work a lot of hours and aim to solve organizational issues at the root cause with systematic solutions. This way, I minimize hands-on management time.


My God, I can practically taste the salt emanating from this post.


I've seen plenty of these essays, and colleagues bashing LLMs online, making fun of mistakes the AI makes, sharing articles about energy usage, etc.

I use it to help me write code in languages I've never used before, to manage infrastructure, to summarize manuals, to explain stuff to me, and what not. The time savings and the productivity boost are huge and convert into real income and a better understanding of various concepts.

Are LLMs perfect? Of course not. Human programmers aren't perfect either, yet companies hire them and pay them huge salaries. AI will change a lot of the business, and writing essays and making fun of it online won't stop it at all.


Fundamentally, it seems inevitable to me that AIs will eventually outperform humans at programming. Humans are terrible at tasks that involve sitting still for long periods of time manipulating abstract symbols. They are not adapted to such tasks at all. Our "general reasoning ability" enables us to do surprisingly much, but it can only go so far when the complexity of systems gets out of hand.

However, I think we'll be fine on this front. Smart and able people will find something to do and the really scary consequences of automation-caused job loss are only seen in regions that are disproportionately affected, such as a production plant closing that employs half of an entire town. Programmers are quite wealthy by comparison, which gives them time and opportunities to adapt, and also usually live in big cities, which offer enough different jobs.

What's really scaring me is the recent papers showing AI has theory-of-mind and persuasion capabilities on par with or even surpassing humans. One would think that's the one thing we're actually evolutionarily adapted for...


Excluding the less "smart and able" people, whatever that means, from this evolution means guaranteed social trouble. And when the pitchforks come out in the streets, living in a big city becomes rather a liability.


I didn't really mean anything by this "exclusion" that is not already the case right now. Programmers (just like anyone) who are able to solve various problems are doing well. Those who cannot, and who are not doing well, can try doing something else; it's their choice. Basically, I just wanted to avoid claiming that "every single programmer will find a great new job".

I guess there was no reason to put in this caveat. Economic developments have never been perfectly fair and neither will this one, which is obvious enough to not need mentioning. The wealth generated by such automation would allow for good social care systems and I hope we get some in any case. That's assuming we don't wipe ourselves out in the process.


I think that programming, as a profession, is reinventing the wheel. If it is not, then either that wheel has not been invented yet, or the programmer is trying to properly fix the wheel their predecessor tried to invent. If there are at most about three proper ways to program any one part of a given program, then I can see how LLMs could be the solution to all this energy wasted on fixing bumpy wheel reinventions. Which means they could also make it worse. But they lack understanding, and therefore ego, so they are not at fault. The programmer understands, and if not, can still rely on ego.


It doesn't matter how good the artist thinks his painting is if others think that anyone can do it better, cheaper, and faster with the help of artificial intelligence. In the end, developers will have to find a profession that will feed them, because the industry no longer considers its programmers' ability to create software a highly valuable asset.


> We got self-styled prophets writing whole books about their personal philosophies and slapping general-purpose-sounding names on them like “Clean Code.” It reminds me of the dudes who ran around the Middle East 2,000 years ago claiming they personally could introduce you to G-d.

Top-tier comedy. When I was in university, not too long ago, all the CS professors were huge fanboys of Uncle Bob. Lots of lectures started with a reference to his books, making it seem like a mandatory part of the curriculum.


I always like to say - One day I'm gonna spend an afternoon on the toilet writing a book like this, at the end I'll look at what's left in the bowl and name it after that. I'm hoping for creamy code.


This sounds like a coping strategy with a bit of “trust me bro” sprinkled in. I would hope Rubyists and shitposters don’t form a Venn diagram but here we are.



