
It is not. They went about 5 years without one of these, and had a handful over the last 6 months. They're really going to need to figure out what's going wrong and clean up shop.


Engineers have been vibe coding a lot recently...


The featured blog post where one of their senior engineering PMs presented an allegedly "production grade" Matrix implementation, in which authentication was stubbed out as a TODO, says it all really. I'm glad a quarter of the internet is in such responsible hands.


It's spreading and only going to get worse.

Management thinks AI tools should make everyone 10x as productive, so they're all trying to run lean teams and load up the remaining engineers with all the work. This will end about as well as the great offshoring of the early 2000s.


there was also a post here where an engineer was parading around a vibe-coded OAuth library he'd made as a demonstration of how great LLMs were

at which point the CVEs started to fly in


Matrix doesn't actually define how one should do authentication though... every homeserver software is free to implement it however they want.


the main bit of auth which was left unimplemented on matrix-workers was the critical logic which authorizes traffic over federation: https://spec.matrix.org/latest/server-server-api/#authorizat...

Auth for clients is also specified in the spec - there is some scope for homeservers to freestyle, but nowadays they have to implement OIDC: https://spec.matrix.org/latest/client-server-api/#client-aut...
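For reference, the federation authorization the spec requires works by signing a canonical-JSON description of each outgoing request with the origin homeserver's ed25519 key and sending the result in an `X-Matrix` Authorization header. A rough stdlib-only sketch of the header construction (the ed25519 signing step is passed in as a callable rather than implemented, and the function names are illustrative, not from any real homeserver codebase):

```python
import json

def canonical_json(obj) -> bytes:
    # Matrix-style canonical JSON: keys sorted, no insignificant whitespace.
    return json.dumps(obj, sort_keys=True, separators=(",", ":")).encode()

def build_auth_header(method, uri, origin, destination, key_id, sign, content=None):
    """Build an X-Matrix Authorization header for a federation request.

    `sign` is a callable taking bytes and returning an unpadded-base64
    signature string; a real homeserver would sign with its ed25519 key.
    """
    to_sign = {
        "method": method,
        "uri": uri,
        "origin": origin,
        "destination": destination,
    }
    if content is not None:
        to_sign["content"] = content
    sig = sign(canonical_json(to_sign))
    return (f'X-Matrix origin="{origin}",destination="{destination}",'
            f'key="{key_id}",sig="{sig}"')
```

The point of the thread stands: the receiving side has to verify this signature against the origin's published keys before trusting any federation traffic, and that's the part that was stubbed out.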


That's a classic Claude move; even the new Sonnet 4.6 still does this.


It’s almost as classic as just short circuiting tests in lightly obfuscated ways.

I could be quite the kernel developer if making the test green were the only criterion.


Wait till you get AI to write unit tests and tell it the test must pass. After a few rounds it will make the test “assert(true)” when it can't get the code to pass the test.
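The "lightly obfuscated" variant upthread is usually a tautological test: the expected value is derived from the code under test itself, so the assertion can never fail no matter how broken the implementation is (names below are made up for illustration):

```python
def kernel_under_test(x):
    return x  # stand-in for whatever broken implementation shipped

def test_kernel_roundtrip():
    # Looks like a real check, but "expected" comes from the very
    # function being tested, so this passes for ANY implementation.
    expected = kernel_under_test(42)
    assert kernel_under_test(42) == expected
```

It's green in CI and useless, which is exactly why it slips past review more easily than a bare `assert True`.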


No joke. In my company we "sabotaged" the AI initiative led by the CTO. We used LLMs to deliver features as requested by the CTO, but we intentionally introduced a couple of bugs here and there. As a result, the quarter ended with more time allocated to fixing bugs and a ton of customer complaints. The CTO is now undoing his initiative, and we've all bought ourselves some more time to keep our jobs.


That's actively malicious. I understand not going out of your way to catch the LLMs' bugs so as to show the folly of the initiative, but actively sabotaging it is legitimately dangerous behavior. It's acting in bad faith. And I say this as someone who would mostly oppose such an initiative myself.

I would go so far as to say that you shouldn't be employed in the industry. Malicious actors like you will contribute to an erosion of trust that'll make everything worse.


Might be, but sometimes you don't have another choice when employers are enforcing AIs that have no "feeling" for the context of the business processes involved, processes built up over the years by human workers, most of whom put a lot of love and energy into them, and who are now forced to compete against an inferior but overpowered workforce.

Don’t stop sabotaging AI efforts.


Honestly, I kinda like the aesthetic of cyberanarchism, but it's not for me. It erodes trust.


Forcing developers to use unsafe LLM tools is also malicious. This is completely ethical to me. Not commenting on legality.


I don't like it either, but it's not malicious. The LLM isn't accessing your homeserver; it's accessing corporate information. Your employer can order you to be reckless with their information; that's not malicious, because it's not your information. You should CYA and not do anything illegal even if you're asked. But using LLMs isn't illegal. This is a bad-faith argument.


You're talking about legality again. I'm talking about ethics.

Using LLMs for software development is a safety hazard. It also has a societal risk, because it centralizes more data, more power, more money to tech oligarchs.

It's ethical to fight this. Still not commenting on legality.


You're not forced to work there and use those tools. If you don't like it, then leave the job. Intentionally breaking things is unethical especially when you're receiving a paycheck to do the opposite.


It may be illegal, but it's not unethical.

Doing unethical things because someone pays you would still be unethical. Opposing those things while someone pays you is still ethical.


Again, no one is forcing him to be there. He's breaking something on purpose. I think you should read up on ethics, because the take "I don't like it, therefore whatever I do is ethical" is juvenile.


That's quite the strawman. The reason it's ethical is not that LLMs are unpopular or that someone dislikes them. It's ethical because LLMs introduce safety hazards, i.e. they cause harm.


Sounds like what an LLM would post if it were tasked to advertise LLM coding abilities. Nice manipulation of human emotions, well played.


I see someone is not familiar with the joys of the current job market.


That's extremely unethical. You're being paid to do something and you deliberately broke it which not only cost your employer additional time and money, but it also cost your customers time and money. If I were you, I'd probably just quit and find another profession.


That's not "sabotaged", that's sabotaged, if you intentionally introduced the bugs. Be very careful admitting something like that publicly unless you're absolutely completely sure nobody could map your HN username to your real identity.


Typo: "shop", should have been with an 'el'.

(: phonetically, because 'l's are hard to read.



