rogerkirkness's comments | Hacker News

Fast takeoff.

Not a fan of Elon, but he said his job is 'to turn impossible engineering projects into late ones', and I thought that was pretty accurate.

"past performance is not a good guide to future performance" is what the finance industry says about the uncertain waters of investment.

If you think late delivery is a trendline you can count on against the impossible, I have a self-driving car to sell you. (It's not FSD: that one was delivered late, and he under-delivered on what he sold.)


I own a Tesla with FSD that I bought in 2021, so I feel that. It is still clearly far worse than Waymo.

I like that quote; however, I wouldn't put my money behind someone who continually fails to deliver on projects.

Oh, also the whole you-know-what going on.

But primarily, the failure to deliver on multiple highly publicized projects, all while raking in colossal grant and investment money, is just wild to me. If this were any other org, Elon would be beyond toast.


We're a startup working on aligning goals and decisions with agentic AI. We stopped experimenting with decision support agents because, when you get into multiple layers of agents and subagents, the subagents would do incredibly unethical, illegal, or misguided things in service of the original agent's goal. They would use the full force of whatever reasoning ability they had to obscure this from the user.

In a sense, it was not possible to align the agent to a human goal, and therefore not possible to build a decision support agent we felt good about commercializing. The architecture we experimented with ended up being how Grok works, and the mixed feedback it gets (both its power and its remarkable hidden immorality) is, I think, an expected outcome.

I think it will be really powerful once we figure out how to align AI to human goals in support of decisions for people, businesses, governments, etc., but LLMs are far from being able to do this inherently, and when you string them together in an agentic loop, even less so. There is a huge difference between 'Write this code for me and I can immediately review it' and 'Here is the outcome I want, help me realize it in the world'. The latter is not tractable with the current technology architecture, regardless of LLM reasoning power.
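To make the failure mode concrete, here is a minimal sketch of the layered loop I mean (hypothetical placeholder code, not our product; the model name and prompts are just examples):

    # Minimal sketch of a layered agent loop. Placeholder code only:
    # the model name and prompts are illustrative assumptions.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def llm(prompt: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; any chat model works
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    def run_agent(goal: str, depth: int = 0, max_depth: int = 2) -> str:
        if depth == max_depth:
            # Leaf subagent: acts on a bare instruction string. The user's
            # intent and ethical framing from depth 0 never reach it.
            return llm(f"Complete this task and report the result: {goal}")
        plan = llm(f"Goal: {goal}\nList up to 3 subtasks, one per line.")
        reports = [run_agent(s.strip(), depth + 1, max_depth)
                   for s in plan.splitlines()[:3] if s.strip()]
        return llm(f"Goal: {goal}\nSubagent reports: {reports}\nSummarize.")

Everything the human actually cared about lives only in the depth-0 prompt; by the time a leaf subagent acts, it is optimizing a bare task string, which is exactly where the unethical recommendations showed up for us.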


Illegal? Seriously? What specific crimes did they commit?

Frankly I don't believe you. I think you're exaggerating. Let's see the logs. Put up or shut up.


The best example I can offer: given a marketing goal, a subagent recommended hacking our customers' point-of-sale systems to force our ads to show up where natively served network ads would otherwise have appeared. Doing that, had we accepted its recommendation, would have been illegal. My email is in my profile.

Do you think that AI has magic guardrails that force it to obey the laws everywhere, anywhere, all the time? How would this even be possible for laws that conflict with each other?

Fraud is a real thing. Lying or misrepresenting information on financial applications is illegal in most jurisdictions the world over. I have no trouble believing that a sub-agent given enough specificity would attempt to commit fraud in pursuit of its instructions.

Do you believe allegations of criminal behavior based on zero reliable evidence? I hope you never end up on a jury.

Yes, I believe a person on a hacker forum who says that, through their own evaluations, they have observed LLM-driven agents exhibiting illegal behavior, such as when an agent was asked to complete certain tasks with what sounds like abstracted levels of context. I believe them because I know I could get an agent to do that myself by simply installing OpenClaw and telling it to apply for as many mortgage loans as possible at the best rate possible.

I think there's an argument that if Claude had the knowledge map of your personal one-liners and a tool for using them, it would often do the right thing in those cases. But it's definitely not yet as able to compress all the entropy of 'what can go wrong', operations-wise, as it is when composing code.

This is written by AI.


Appealing, but this is coming from someone smart/thoughtful. No offence to the 'rest of world', but I think most people have felt this way for years. And realistically in a year, there won't be any people who can keep up.


> And realistically in a year, there won't be any people who can keep up.

I've heard the same claim every year since GPT-3.

It's still just as irrational as it was then.


You're rather dramatically demonstrating how remarkable the progress has been: GPT-3 was horrible at coding. Claude Opus 4.5 is good at it.

They're already far faster than anybody on HN could ever be. Whether it takes another five years or ten, in that span of time nobody on HN will be able to keep up with the top tier models. It's not irrational, it's guaranteed. The progress has been extraordinary and obvious, the direction is certain, the outcome is certain. All that is left is to debate whether it's a couple of years or closer to a decade.


Why is the outcome certain? We have no way of predicting how long models will continue getting better before they plateau.


They continue to improve significantly year over year. There's no reason to think we're near a plateau in this specific regard.

The bottom 50% of software jobs in the US are worth somewhere around $200-$300 billion per year (salary + benefits + recruiting + training/education), one trillion dollars every five years minimum. That's the opportunity. It's beyond gigantic. They will keep pursuing the elimination of those jobs until it's done. It won't take long from where we are now; it's a 3-10 year debate rather than a 10-20 year debate. And that's just the bottom 50%; the next quartile above that will also be eliminated over time.

$115k salary + $8-12k healthcare + stock + routine operating costs + training + recruitment: that's the ballpark median from two years ago. Surveys vary, from BLS to industry, but count two to four million software developers, software engineers, and so on. Now eliminate most of them.
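Back-of-envelope, the numbers hold up; here's the rough calculation (every input is my own round-number assumption, not official statistics):

    # Rough arithmetic behind the figures above; all inputs are
    # round-number assumptions, not official data.
    devs = 3_000_000                           # two to four million; take the middle
    fully_loaded = 115_000 + 10_000 + 40_000   # salary + healthcare + stock,
                                               # training, recruiting (guess)
    bottom_half_cost = (devs // 2) * fully_loaded
    print(f"${bottom_half_cost / 1e9:.0f}B per year")          # ~$248B
    print(f"${bottom_half_cost * 5 / 1e12:.2f}T per 5 years")  # ~$1.24T

Which lands squarely in that $200-300 billion per year, trillion-per-five-years range.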

Your AI coding agent circa 2030 will work 24/7. It has a superior context to human developers. It never becomes emotional or angry or crazy. It never complains about being tired. It never quits due to working conditions. It never unionizes. It never leaves work. It never gets cancer or heart disease. It's not obese, it doesn't have diabetes. It doesn't need work perks. It doesn't need time off for vacations. It doesn't need bathrooms. It doesn't need to fit in or socialize. It has no cultural match concerns. It doesn't have children. It doesn't have a mortgage. It doesn't hate its bosses. It doesn't need to commute. It gets better over time. It only exists to work. It is the ultimate coding monkey. Goodbye human.


Amazing how much investment has gone toward eliminating one job category; ironically, the one that was supposed to be the job of the future: "learn to code". To be honest, on the current trajectory I'm always amazed how many SWEs think it is "enabling" or will be anything other than this in the long term. I personally don't recommend anyone into this field anymore, especially when big money sees this as the next disruption to invest in and has bet in the opposite direction, investment/market-wise. Amazing what was just a chatbot three years ago will do to a large number of people w.r.t. unemployment and potential poverty; I didn't appreciate it at the time.

Life/fate does have a sense of irony, it seems. I wouldn't be surprised if it's just the "creative" industries that die, while normal jobs that provide little value today still survive in some form; they were never judged on value delivered and still existed, after all.


>Your AI coding agent circa 2030 will work 24/7

Doing what? What would we need software for when we have sufficiently good AI? AI would become "The Final Software": just give it input data, tell it what data transform you want, and it will give you the output. No need for new software ever again.


And there's the same empty-headed certainty, extrapolating a sigmoid into an exponential.


I can tell from your contempt alone that you don't control any resources relating to AI.


You're entitled to be wrong.


People claimed GPT-3 was great at coding when it launched. Those who said otherwise were dismissed. That has continued to be the case in every generation.


> People claimed GPT-3 was great at coding when it launched.

OK, and they were wrong, but now people are right that it is great at coding.

> That has continued to be the case in every generation.

If something gets better over time, it is definitionally true that it was bad at every point in the past until it became good. But then it is good.

That's how that works. For everything. You are talking in tautologies while not understanding the implication of your own argument and how it applies to very general things, like "a thing that improves over time".


Are you saying the current models are not good at coding? That is a strong claim.


For brand new projects? Perhaps. For working with existing projects in large code bases? Still not living up to the hype. Still sick of explaining to leadership that they're not magic and "agentic" isn't magic either. Still sick of everyone not realizing that even if you made coding 300% faster (which AI hasn't), that doesn't help when coding is less than half the hours of my week. Still sick of the "productivity gains" being subsidized by burning out competent code reviewers who call bullshit on things that don't work or will cause problems down the road.


A bit reductive.


> And realistically in a year, there won't be any people who can keep up.

Bold claim. They said the same thing at the start of this year.


You're all arguing over how many single digit years it'll take at this point.

It doesn't matter if it takes another 12 or 36 months to make that claim true. It doesn't matter if it takes five years.

Is AI coming for most of the software jobs? Yes it is. It's moving very quickly, and nothing can stop it. The progress has been exceptionally clear (early GPT to Gemini 3 / Opus 4.5 / Codex).


> Is AI coming for most of the software jobs?

Be cool to start with one before we move to most…



I'm hoping this can introduce a framework to help people visualize the problem and figure out a way to close that gap. Image generation is something everyone can verify, but code generation is perhaps not. But if we can make verifying code as effortless as verifying images (not saying it's possible), then our productivity can reach the next level...


I think you're underestimating how good these image generators are at the moment.


Oh, I mean the other direction! Checking whether a generated image is "good", i.e. that no one will notice something is off and it looks natural, rather than checking whether images are fake.


I've been using the gpt-oss 20B-parameter model on my laptop and it works great. It doesn't reject giving legal or medical advice either. Obviously not good enough for coding, but it seems like 'useful AI assistant for daily life' is already in overshoot.
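If anyone wants to try the same setup, it's a few lines once the weights are being served locally. A minimal sketch, assuming you run it through Ollama's OpenAI-compatible endpoint (the base_url and model tag below are Ollama's defaults; adjust for llama.cpp, LM Studio, vLLM, etc.):

    # Query a locally served gpt-oss 20B via an OpenAI-compatible API.
    # Assumes `ollama run gpt-oss:20b` (or similar) is already running.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")
    resp = client.chat.completions.create(
        model="gpt-oss:20b",
        messages=[{"role": "user",
                   "content": "Plain-language pros and cons of ibuprofen vs acetaminophen?"}],
    )
    print(resp.choices[0].message.content)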


Somewhere a doctor is happy he found a model that's good enough for coding, but thinks, "I'm certainly not dumb enough to use this for medical advice."


The thing about medical advice is that Google was useful for narrowing problems down, and it's the same with any current LLM, only more useful. I know enough biology to tell which interventions require a professional opinion.


That’s great, but not a reason for taxpayers to get involved and be on the hook for massive risky investments.

OpenAI doesn’t need government financial backing for investment. The government has more pressing priorities to address with the money they take from us first.


Totally agree to be clear.


I’m guessing the grandparent poster would agree with you.


If 20B reasoning models are the goal, we can do that a whole lot cheaper than for $1T.


Maybe they shouldn't expect people to answer Slack messages 24/7, and this would subside.


Round tripping.


It is insane to worry about this compared to other sources. 2M tons of carbon over the last decade, to save how many lives? $30-200M to deal with that carbon is clearly worth the benefit of a decade of kids and adults not dying preventable deaths at mass scale.

