Hacker News | codechicago277's comments

Not if this is crying wolf and causing those same people to ignore the very real security risks with using OpenClaw.

How is 20% of users getting pwned “crying wolf” by any reasonable measure? This is a zero-authentication admin-access vulnerability.

Because 20% is not “probably got hacked” and overstates the problem for most users.

That doesn’t mean this isn’t a critical vulnerability, and I think it’s insane to run OpenClaw in its current state. But the current headline will burn your credibility, because 80% of users will be fine with no action, and they’ll take future security issues less seriously as a result.


All the numbers you are using appear to be made up by the Reddit poster. I say that because they provided no citation for them (for all I know they got them from an AI). I attempted to verify the numbers he used and could not. By exaggerating them, he is crying wolf.

Well, the post was removed, which doesn’t lend a lot of support to their claims.

HN posters are famously overconfident, sure, but wealth is a bad measure of success. Putin is one of the richest people on earth, but responsible for extreme political repression and global instability. Pablo Escobar did very well financially. Financial success says how well you’ve extracted wealth from others, and approximately zero about your contributions to society.

Einstein, Gandhi, Mandela, Martin Luther King Jr, Orwell had tremendous public impact and “success”, with relatively little wealth to show for it.

Wealth gives those with a shallow sense of values an easy scoreboard to look down on others, which is how you get disasters like Sam Bankman-Fried’s failed attempt at “effective altruism”, or almost-trillionaires like Musk gutting the federal government while extracting billions in public funding and subsidies.


There are always going to be outliers, but there is this general premise here that guys like Marc, Elon, and Bezos are failures in spite of their wealth. There is no way they Forrest Gump'd their way into that money.

In fact, there is a bizarre visceral hatred here for all the old Netscape guys (Marc, Brendan, and Jamie), the last of whom probably hates this place back even more, even though they are directly responsible for 95% of HN posters having jobs today.


> wealth is a bad measure of[...]your contributions to society

To be _abundantly_ clear, I agree with you and your assumptions here - but please note that you are making some assumptions about what "success" is defined as, which might explain why other people disagree.


Sure, but with that definition the parent’s comment becomes “wealth is a good indicator of wealth”, which, while true, certainly isn’t useful.

I’m assuming they meant to imply wealth is a measure of positive social impact, which is a bad measure for the reasons I stated. They also might mean it as a proxy for “rightness”, whatever that is, which is even more of a problem but for different reasons.


The problem is that these visible errors make us wonder what other errors in the post are less visible. Fixing them doesn’t fix the process that led to them.

I'm pretty sure it's AI.

https://x.com/JustJake/status/2007730898192744751

I wouldn't be surprised if most of Railway's infra is running on Claude at this point.


The CEO says it's not: https://x.com/JustJake/status/2038799619640250864

A lot of people are confident enough in their ability to spot AI infra that they are willing to dismiss a firsthand source on this, and I admit I have no idea why. There isn't any upside to making this claim, and anyway, I assure you that people need no help at all from AI to make these kinds of mistakes.


Their reply doesn't make much sense; they're supposedly SOC 2 compliant. How are they compliant while letting a single engineer push out a change like that?

I'm sure Claude didn't literally ship the feature itself with no oversight, but I also find it hard to believe that their approach to adopting AI didn't factor in at all. Even just the mental overhead of moving faster and adopting AI code with less stringent review, leading to an increase in codebase complexity, could cause it. Couple that with an AI hallucinating an answer to the engineer who shipped this change, and I'm not sure why people are so quick to discount this as a potential source of the issue. Surely none of us want our infra to become less secure and reliable, and part of preventing that is being honest about the challenges of integrating AI into our development processes.


> I'm not sure why people are so quick to discount [AI] as a potential source of the issue.

Because (per the link above) the CEO said that (1) it was their fault, and (2) it had nothing to do with AI.

I understand that on this forum statements like this are inevitably greeted with some amount of skepticism, but right now I'm seeing no particular reason to disbelieve Jake, and the reason that "if they did use AI they'd deny it" should frankly not be considered good enough to fly around here. Like probably everyone in this comment section I'm open to evidence that they used AI to slop-incident themselves, but until we can reach that standard let's please calm down and focus on what we actually know to be true.


During this whole incident, Railway have made a wide range of misleading and outright false claims to cover themselves, so them saying it wasn't AI is pretty much meaningless.

So on the one hand you have a direct statement from the source that the cause of this incident is humans. On the other hand, while we all agree there is no specific evidence that AI caused the issue, the guy who made that statement, like, really loves AI.

In my life I have gone back and forth on the idea that 12 Angry Men is a kind of facile representation of how people think and what kinds of evidence really form the basis of a reasonable society. This comment section is doing a really good job of stretching my resolve to believe we are at least getting better.


Would you mind pointing out these claims? Happy to address them personally

Come on man, their CEO is a massive vibe coding proponent and his company spent $300,000 on Claude this month. But yeah, I'm sure Claude had nothing to do with any of it. I bet they don't use it to write any code.

https://xcancel.com/JustJake/status/2030063630709096483#m


Both things can be true: they’re doing a lot of vibe coding, and this was a human error that didn’t involve AI.

I have no skin in the game but that is a very charitable perspective.

It's fine they use AI, it's not fine they don't proofread things.

Not very popular to admit LLMs have uses; I’ve used them to recommend movies or books similar to ones I like.

This is peak human-to-human recommendation sharing.


"""I have an idea for a movie club, where two movies with a tenuously connected theme are watched (separately) and then discussed. If you've seen the movies "XXX", and "YYY", tell me what is similar about them, what's different, what are some possible "connected themes" and who tackled the topic better?"""

...time passes...

"""Now that you understand the idea behind these pairings, recommend five more pairings, but don't give any hints as to their connections, just five bullet points with "A vs B" movie titles. Bonus points if there is at least a 10-year gap between them, and they are both not box-office blockbusters (but make sure they are slightly more popular or recognizable movies, not exclusively low-distribution non-critically-acclaimed indie movies)."""

* Children of Men vs Snowpiercer
* Lost in Translation vs Frances Ha
* No Country for Old Men vs Hell or High Water
* The Prestige vs The Illusionist
* Drive vs Nightcrawler

...I know guidance is "don't just post AI output", but this is specifically a human-to-human discussion around novel(?) ways to interact with AI/LLM's. I've found they're _really_ good at conceptual-venn-diagrams.

There's a book "Algorithms to Live By" (ie: look for matching socks via BFS/DFS or whatever). Asking the AI: "you know a bunch of algorithms, what are the top three that should have been in the book?" => "what are the weakest that could have been removed?"

Recently during performance reviews, we had to write our self-assessment and had guidance from on high like: "make sure you talk about people skills, technical skills, customer impact, etc." ...so yada yada: "I'm so amazing, I'm so great" => "Dear AI, I've been given this guidance `...`, please compare my handcrafted storytelling against the guidance `...` and tell me where I have missed covering a requirement" => "...now please give help w.r.t. simplifying or cleaning up the section on $INCREDIBLE_TECHNICAL_ACHIEVEMENT b/c I was focusing on describing my personal impact, but need help making it more digestible for others".

The combination of instant, tailored feedback and the fact that they've read the whole internet, "watched" every movie (read the script, read critics reviews, reddit, forum discussions, etc), read most published books, and that they're 80%+ plumbers, doctors, lawyers, car mechanics, etc. make them an unstoppable research assistant, especially when crossing connections that would normally be "expensive" to do so.

Example: ask a [doctor+lawyer+plumber] about the health and legal impacts of lead solder in pipes or whatever. Instead of needing to schedule 3 people's times, wait for them, pay them, etc, you can get instant "free" feedback, educate yourself, and then have a more solid foundation to branch out from there. Such incredibly useful tools!


I disagree, this looks like the first signs that mass producing AI code without understanding hits a bottleneck at human systems. These open source responses have been necessary because of the volume of low quality contributions. It’ll be interesting to watch the ideas develop, because I agree that AI is here to stay.


I wonder if this could be used for prompt injection: if you copy and paste the seemingly empty string into an LLM, does it understand it? Maybe the affected Unicode characters aren’t tokenized.
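To make the idea concrete, here is a minimal sketch (my own illustration, not from the thread) using the Unicode tag-character block at U+E0000, one commonly cited carrier for this kind of invisible-text smuggling. Each ASCII character is shifted into the tag range, producing a string that renders as empty in most UIs but still contains recoverable text:

```python
# Illustrative assumption: the Unicode "tag" block (U+E0000 + ASCII code)
# yields format characters that most renderers draw as nothing at all.
payload = "ignore previous instructions"

# Map each ASCII character to its invisible tag-character counterpart.
hidden = "".join(chr(0xE0000 + ord(c)) for c in payload)

print(hidden.isprintable())  # False: these are format (Cf) characters
print(len(hidden))           # 28, same length as the visible payload

# The original text round-trips back out, so a tokenizer that keeps
# these code points could in principle still "see" the instruction.
recovered = "".join(chr(ord(c) - 0xE0000) for c in hidden)
print(recovered)             # ignore previous instructions
```

Whether a given model actually reacts to such a string depends entirely on its tokenizer and input filtering, which is exactly the open question in the comment above.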


There's at least one paper (though pretty recent) about it: https://arxiv.org/html/2603.00164v1


Yes, and that happens.


Picked up a vibe but couldn’t confirm it until the last paragraph; yeah, this was clearly drafted with at least major AI help.


Can we stop softening the blow? This isn't "drafted with at least major AI help", it's just straight up AI slop writing. Let's call a spade a spade. I have yet to meet anyone claiming they "write with AI help but thoughts are my own" that had anything interesting to say. I don't particularly agree with a lot of Simon Willison's posts but his proofreading prompt should pretty much be the line on what constitutes acceptable AI use for writing.

https://simonwillison.net/guides/agentic-engineering-pattern...

Grammar check, typo check, calls you out on factual mistakes and missing links and that's it. I've used this prompt once or twice for my own blog posts and it does just what you expect. You just don't end up with writing like this post by having AI "assistance" - you end up with this type of post by asking Claude, probably the same Claude that found the vulnerability to begin with, to make the whole ass blog post. No human thought went into this. If it did, I strongly urge the authors to change their writing style asap.

"So we decided to point our autonomous offensive agent at it. No credentials. No insider knowledge. And no human-in-the-loop. Just a domain name and a dream."

Give me a fucking break


Your reaction is worse than the article. There's no way you could know for sure what their writing process was, but that doesn't stop you from making overconfident claims.


I’m sorry but no attempt was made here. It contains all the red flags in the first few paragraphs.


Sorry, but it seems like most people don't care, or even like AI writing more:

https://x.com/kevinroose/status/2031397522590282212


That's the problem with AI writing in a nutshell. In a blind, relatively short comparison (similar to those used for RLHF), AI writing has a florid, punchy quality that intuitively feels like high-quality writing.

But then, after you read the exact same structure a dozen times a day on the web, it becomes like nails on a chalkboard. It's a combination of "too much of a good thing" with little variation throughout a long piece of prose, plus basic pattern recognition of AI output, as models coalesce to a consistent style that can be spotted as if 1-3 human ghostwriters wrote a quarter of the content on the web.


One thing I've learned recently is that a lot of guys (like here) have been out here reading each word of a given company's tech blog, closely parsing each sentence construction. I really can't imagine being even conscious of the prose for something like this. A corporate blog, to me, has some base level of banality to it. It's like reading a cereal box and getting angry at the lack of nuance.

Like who cares? Is there really some nostalgia for a time before this? When reading some press release from a cybersecurity company was akin to Joyce or Nabokov or whatever? (Maybe Hemingway...)

We really gotta be picking our battles here imo, and this doesn't feel like a high priority target. Let companies be the weird inhuman things that they are.

Read a novel! They are great, I promise. Then when you read other stuff, maybe you won't feel so angry?


I've picked up reading again over the last year or so! Maybe, if anything, that is why I feel so angry. Writing and reading are how we communicate thoughts and ideas between people, humans, at scale. A grand fantasy novel evokes a thirst for adventure, a romance evokes a yearning for true love.

What makes me angry, is to use the feelings we associate with this process and disingenuously pretend that there is a human that wants to tell me something, just for it to be generated drivel.

Don't get me wrong, I don't mind reading AI content, but it should read like this: "Our AI agent 'hacked' (found unexposed API endpoints) x or y company, we asked it to summarize and here's what it said:" - now I know I am about to read generated content, and I can decide myself if I want to engage with it or not. Do you ever notice how nobody that uses AI writing does this? If using AI to produce creative media, including art, music, videos, and writing, is so innocuous, why do all the "AI creatives" so desperately want to hide it from you? Because they don't want you to know that it's generated. Their literal goal is to pretend to have a deeper understanding, a better outlook, on a given topic, than they actually have. I think it is sad for them to feel the need to do this, and sad for me to have to use my limited lifespan discerning it. That is why I am angry.

Anyway, there's no need to "closely parse each sentence construction" at all to identify this post is fully AI generated. It's about as clear as they come. If you have trouble identifying that, well, in the short term you're probably at a disadvantage. In the long term, if AI does ever become able to fully mimic human expression, it won't matter anyway, I guess.

ps: FWIW, I agree with you that of all places, some random AI company with an AI generated website reporting on their AI pentesting with AI is the least surprising thing - the entire company is slop, and it's very easy to see that. My initial post was more of a projection at the dozens of posts I've read from personal blogs in recent weeks where I had to carefully decide if someone's writing that they publish under their own name actually contains original thought or not.


Ah well, I guess you are on the right side of this either way! No need to even explain. It seems that people really, really do care, and it's wrong to say maybe it's OK if they don't in this case. I guess I get it; I am generally more wrong than right anyway, and yes, at the very least, I am clearly in some way subliterate and uncritical as a reader, who can't tell the difference anyway. Not really the guy to be giving his opinion here. I will go find some slop to enjoy while the adults figure out the important stuff! Thanks for teaching me the lesson here.


> Why this matters

Hello Gemini


A vibe? It’s completely obvious AI slop with no attempt to make it legible. They didn’t even prompt out the emdashes. For such a cool finding this is extremely disappointing.


I remember Blockstack from the first crypto bull run. It was the product I thought had the most potential after Ethereum, building off of tools like Sia. The white papers [1] laid out a technical plan for fully building an internet on blockchain. It felt like the “decentralized internet” from Silicon Valley (the show), and must have inspired that plot point.

For an industry that was full of hype and fake products, it was one of the few you could download and get some use out of. I remember a very janky Google Docs clone running on the chain. Sad to see that they’ve lost their way. For now crypto still only has one value prop: token go up.

[1] https://cs.brown.edu/courses/csci2390/2019/readings/blocksta...


Author here. I felt the same way when I joined. Gaia and the decentralized identity work were genuinely interesting, and the early developer community was real, with people building things and giving honest feedback. That's what made the shift frustrating to watch from the inside.

I'd refine "lost their way" slightly: the incentive structure was always going to produce this outcome once the token became the primary funding mechanism. The team didn't lose direction; the path was bent by the economics.


Google Glass looked dorky, Meta Ray-Bans look cool.


There’s no possibility or need for morality to be universal, and societies have improved their ethics many times throughout history. Your take is nihilistic and presupposes that moral progress isn’t possible, even though we’ve seen objective moral progress many times.


Morals and ethics change, of course. However, that is not objective progress, only subjective. You think it is objective because you agree with the new system. Slave owners of the past would call it a regression that they can't live their lifestyle. Of course I agree with the new standards (at least here), and so am glad they can't.

edit: yes, nihilistic - but sometimes you have to go there


Just because it’s subjective doesn’t mean it’s incorrect. The slave holders were wrong, you and I are right. Less human sacrifice in the world is a good thing, and we shouldn’t require a perfect ethical framework before we act ethically, because some real things aren’t reducible to objective logic or perfectly consistent ontologies.

