Technical blogs from infrastructure companies used to serve two purposes: demonstrate expertise and build trust. When the posts start overpromising, you lose both.
I don't know enough about this specific implementation to say whether "implemented Matrix" is accurate or marketing stretch. But the pattern of "we did X" blog posts that turn out to be "we did a demo of part of X" is getting tiresome across the industry.
The fix is boring: just be precise about what you built. "We prototyped a Matrix homeserver on Workers with these limitations" is less exciting but doesn't erode trust.
I raised this point on a previous Cloudflare blog post; they've turned quite vapid these days. If you pay attention, they're stuffed to the brim with generated text that is sloppy and shows no clear sense of who the writing is even for.
Yeah normally the CF blog ranks as one of the best in the world in my book, so a post of lower quality and potentially AI slop really stands out here.
That said I think the concept of a full matrix server running all on CF infrastructure/services is an awesome blog post from CF.
Honestly I wish CF would simply unpublish/retract this blog post, put another engineer on it to help the PM, and spend another couple of weeks polishing the post/code to republish the same blog post.
They can't do that though. If they did, it would make the shareholders and CEOs mad because it would demonstrate that LLMs cannot (yet) deliver on all the promises these CEOs have been claiming for this entire time.
My charitable read on this is that an individual vibe-coded both the post and repository and was able to publish to the Cloudflare blog without it actually being reviewed or vetted. They also are not an engineer and when the agent hallucinated “I have built and tested this and it is production grade,” they took it at face value.
You can tell because the code is in a personal public repository, not Cloudflare's, which IMO is the big giveaway. This is a lesson for Cloudflare in having appropriate review processes for public comms, and for the individual in avoiding claims they cannot substantiate or verify independently.
This person works for Cloudflare. What else are they "vibe coding?" How long until Cloudflare shuts off half the internet due to a "mistake" again? How much longer are we going to accept that these are mistakes?
I've always found it interesting that these tech infra companies' stock tends to rise in the immediate aftermath of these outages. My best guess is that people see the effect of the outage and say "Hey, this company I've never heard of sure seems to have a lot of customers!"
To be fair I've benefited from that in the past; this is an observation of my own that doesn't represent the views of any of my current or former employers.
The problem is this analysis and the mindset of a shareholder are about as far apart as you can get. The market likes to pretend it is "sophisticated and knowledgeable." It's a slot machine and as long as the handle pullers smell money in the machine they're going to sit there and pull.
I don't know why it being potentially vibe coded or vibe written exonerates the author. "Your job is to deliver code you have proven to work [1]." It is your duty to ensure your work works, no matter what tools you used. You don't get to pass the blame on an AI agent any more than you get to blame intellij autocomplete for your buggy code.
Furthermore, I don't see why we are extending the principle of charity to Cloudflare, a billion-dollar enterprise controlling a significant part of internet traffic while self-identifying as a "utility." If Cloudflare deserves more of something from us, it is scrutiny and accountability, not charity and deference.
I agree, but it's probably not just about being "able to" do it, but about what the incentives and pressures are in that organization.
Cloudflare apparently considers blog posts to be a key deliverable for many roles. Not just marketing or devrel but engineering too. That sets up a lot of incentives for slop. And then all you need for a disaster is a high trust environment with insufficient controls, which they probably have since the process had worked for a decade without an insufficiently reviewed article blowing up in their face.
Going forward there will be just a little bit less trust, more controls, and more friction that will make it harder to get a post out in a timely manner. It's just the way all organizations evolve. You can see from the scar tissue where problems existed in the past.
What I can't believe is that they haven't retracted the whole post by now, but are allowing the author to make an even bigger mess trying to fix the initial problems.
I'd love to see a root cause analysis post by Cloudflare for this one. The ones they do after outages are always interesting to read.
How did this make it into the blog? What is the review process for these posts and what failed this time? What measures will be taken to restore Cloudflare blog's reputation?
Days after the fake story about Cursor building a web browser from scratch with GPT-5.2 was debunked. Disbelief should be the default reaction to stories like this.
Btw, after I wrote that initial article ("Cursor's latest "browser experiment" implied success without evidence"), I gave it my own try to write a browser from scratch with just one agent, using no 3rd party crates, only commonly available system libraries, and just made a Show HN about it: https://news.ycombinator.com/item?id=46779522
The end result: one agent (codex) and I managed to build something more or less equivalent to Cursor's "hundreds of agents" running for weeks and producing millions of lines of code, in just 20K LOC (this includes X11, macOS, and Windows support). It has --headless and --screenshot, handles scaling, link clicking, and scrolling, and can render basic websites mostly fine (like HN) and most others not so fine. Also included CI builds and automatic releases, because why not.
This project is awesome - it really does render HTML+CSS effectively using 20,000 lines of dependency-free Rust (albeit using system libraries for image rendering and fonts).
A PoC that would usually take a team of engineers weeks to build, because no single person has all the cross-disciplinary skills, can now be done by one person, at the cost of long-term tech debt born of that same lack of cross-disciplinary knowledge.
> Yes, this is what AI-assisted coding is good at.
This is where I wish we spent more energy: figuring out better ways to work with the AI, rather than trying to replace some parts wholesale with AI. I wrote a bunch more specifically about that, while I was watching the agent work on the browser itself, here: https://emsh.cat/good-taste/ (it's like a companion piece, I guess)
Would be interested to know what people think of the locking implementation for the net worker pool.
I'm no expert, but it seems like a strange choice to me: using a mutex around an MPSC receiver, so whoever locks first gets to block until they get a message.
Is that not introducing unnecessary contention? It wouldn't be that hard to retain a sender for each worker and round-robin them.
I haven’t looked at the code, but what you’re describing doesn’t sound that bad. If the queue is empty then it doesn’t matter whether a worker is waiting on the lock or waiting on the receiver itself. If the queue is non-empty then whoever has the lock will soon complete the receive and release the lock. It would be better to just use an actual MPMC channel, but if the traffic on the queue isn’t too high then it probably doesn’t make a significant difference. With round robin in contrast, the sender would risk sending a job to a worker that was already busy, unless it took additional measures to avoid that.
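To make the pattern under discussion concrete, here is a minimal sketch (all names are mine, not from the actual repo): N workers share one `mpsc::Receiver` behind a `Mutex`, and each worker holds the lock only for the duration of a single `recv()`.

```rust
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

// Spawn n workers that all pull jobs from a single shared receiver.
// Whichever worker locks first blocks in recv(); once it gets a job,
// the guard drops at the end of the statement and the next idle
// worker can take the lock.
fn spawn_pool(n: usize, rx: mpsc::Receiver<u32>) -> Vec<thread::JoinHandle<u32>> {
    let rx = Arc::new(Mutex::new(rx));
    (0..n)
        .map(|_| {
            let rx = Arc::clone(&rx);
            thread::spawn(move || {
                let mut sum = 0u32;
                loop {
                    // Lock is held only while recv() completes.
                    let job = rx.lock().unwrap().recv();
                    match job {
                        Ok(v) => sum += v,
                        Err(_) => return sum, // channel closed: we're done
                    }
                }
            })
        })
        .collect()
}

fn main() {
    let (tx, rx) = mpsc::channel();
    let workers = spawn_pool(4, rx);
    for v in 1..=100u32 {
        tx.send(v).unwrap();
    }
    drop(tx); // close the channel so workers exit
    let total: u32 = workers.into_iter().map(|h| h.join().unwrap()).sum();
    assert_eq!(total, 5050);
    println!("total = {}", total);
}
```

As the parent comment says, with low traffic on the queue the contention is modest, since the lock is released as soon as a job arrives. An actual MPMC channel (e.g. from the crossbeam crate) would avoid the mutex entirely.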
I suspect this is just an LLM hallucinating generic thread-safety boilerplate. In an async serverless runtime like Workers this pattern creates blocking risks and doesn't actually solve the distributed consistency problem.
The outrageous part of this is that nothing in the blog post or the repository indicates it's vibe-coded garbage (hopefully I didn't miss it?). You expect some level of bullshit in an AI company's latest AI vibe-coding announcements. This can be mistaken for a classical blog post.
> A production-grade Matrix homeserver implementation
It's getting outright frustrating to deal with this.
Fine, random hype-men get hyped about stuff and tweet about it; that doesn't bother me too much.
Huge companies that used to have a lot of goodwill putting out stuff like this, seemingly with absolutely zero review before hitting publish? What are they doing? Has everyone decided to just give up and give in to the slop? We need "engineering" to make a comeback.
As long as you take ownership, test your stuff and ensure it actually does what you claim it does, I don't mind if you use LLMs, a book or your dog.
I'm mostly concerned that something we used to see as a part of basic "software engineering" (verify that what you build is actually doing what you think it is) has suddenly made a very quick exit from the scene, in chase of outputting more LOC which is completely backwards.
I review every line of code I generate, and make sure I know enough that I can manually reproduce everything I commit if you take away the LLM assistant tomorrow.
This is also what I ask our engineers to do, but it's getting hard to enforce.
If you take ownership of the code you submit, then it does not matter if it was inspired by AI; you are responsible from now on, you will be criticized, and possibly you will be expected to maintain it as well.
Vibing is incompatible with engineering and this practice is disgusting and NOT acceptable.
I get vibe coding a feature or news story or whatnot, but how do you not even check whether the thing actually works, or fact-check the blog post?
Optics is the only thing that matters; there are people genuinely pushing for vibe coding on production systems. Actually, all of the big companies are doing this, and claiming it is MORE safe because it reduces human error.
I'm starting to believe they are all right, actually. Maybe frontier models have surpassed most humans, but the bar we should have for humans is really, really low. I genuinely believe most people cannot distinguish LLMs' capabilities from their own capabilities, and they are not wrong from the perspective they have.
How could you perceive, out in the wild, an essence that escapes you?
Coming to the comments to brag about ignoring something you clearly didn't ignore (given that you're here in the comments) is actually pretty abnormal behavior.
Normal people don't jerk themselves off about being edgy in public. Hope this helps!
It's clear that on Hacker News many people have made absurdly deep investments into this "technology." There's going to be a long period of pearl clutching we have to dig out of until we get back to the standard hacker ethic of not believing anything published by corporations.
It seems as if literally everyone associated with "AI" is a grifter, shill (sorry, "Independent Researcher"), temporarily embarrassed billionaire, or just a flat-out scammer.
I would not rule out that sometimes they are just incompetent and believe their own story, because they don't know any better. Seems this is called a "bad apple"?
Everyone (not really, but basically yes) associated with $current_thing is a rent seeking scammer.
Even if Blockchain has tremendous impact, even if transformers are incredible (really) technology, even if NFTs could solve real world problems...you could basically say the same thing and be right, rounding up, 100% of the time, about anything technology related (and everything else as well). This truly is a clown world, but it is illegal to challenge it (or considered bad faith around here)
The version that was live on GitHub the day they published their blog post was missing compilation instructions, didn't cleanly compile and didn't pass GitHub Actions CI.
The project itself did compile most of the time it was being developed - the coding agents had been compiling it the whole time they were running on it.
The "it didn't even compile" criticism is valid in pointing out that they messed up the initial release, but if you think "it never compiled" you have an incorrect mental model.
It used cssparser and html5ever from the Servo project, and it used the Taffy library for flexbox and CSS grid layout algorithms which isn't officially part of Servo but is used by Servo.
I'd estimate that's a lot less than 60% of the "actual work" though.
My bad, I was misinformed; thanks for correcting me. I thought it used the renderer, not just the parser. That's honestly way better than what I thought.
I believe it was basically a broken, non-functioning wrapper around Servo internals. That’s what I’d expect from a high schooler who says “i wrote a web browser”, but not what I’d expect from a multi-billion dollar corporation.
They aren't really a multi-billion dollar corporation. A lot of it is them just pumping up their valuation. Stuff like this proves that in a lot of ways.
> This post was updated at 11:15 a.m. Pacific time to clarify that the use case described here is a proof of concept. Some sections have been updated for clarity.
But then the bottom still says:
> Our team is using Matrix on Workers, handling real encrypted communications. It is fast, it is cheap, and it is arguably one of the most secure ways to deploy a homeserver today.
I guess they're dogfooding something that's wildly insecure and incomplete internally. Kind of surprising that's allowed on CloudFlare's internal network if true, but I guess shadow-IT is everywhere.
It is worrying to see a major vendor release code that does not actually work just to sell a new product. When companies pretend that complex engineering is easy it makes it very hard for the rest of us to explain why building safe software takes time. This kind of behavior erodes the trust that we place in their platform.
The real concern is that we've been doing this race to the bottom for so long that it's becoming almost trivial to explain why they are wrong. This oversimplification existed before AI coding, and it's the dream AI coding took advantage of. But this market for lemons got too greedy.
Since cloudflare are busy editing this blog post to say something completely different from what it originally said, I feel that this archive link is relevant
Hah. The coward even deleted the telltale "not just X; Y" LLM dead-giveaway line from the blog, after someone vomit emoji quoted it in the mastodon thread.
That the original post to HN linked in the blog was done on a throwaway kind of implies a level of awareness (on the part of the dev) that the code/claims were rubbish :)
> Traditionally, operating a Matrix homeserver has meant accepting a heavy operational burden. You aren't just installing software; you are becoming a system administrator. You have to provision virtual private servers (VPS), tune PostgreSQL for heavy write loads, manage Redis for caching, configure reverse proxies, and handle rotation for TLS certificates. It’s a stateful, heavy beast that demands to be fed time and money, whether you are sending one message a day or one million.
I have limited experience with Matrix, but you don't actually need Synapse (reference homeserver) which is quite a resource hog and not even remotely easy to setup/administer.
You can just use the lightweight Continuwuity homeserver for the Matrix part, and Caddy for the reverse proxy/TLS/ACME part, installed on a VPS. Both require minimal configuration, and provide packages for many Linux distributions, as well as Docker images.
(Continuwuity is a fork of conduwuit which was a fork of Conduit. Conduit was abandoned, but is now active again, and there are also other active forks as well. However, it seems to me that Continuwuity is currently the most active fork.)
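To make "minimal configuration" concrete: a hedged sketch of the entire Caddy side of that setup (the hostname is illustrative, and the port assumes the Conduit-family default of 6167, so check your homeserver config):

```
# Caddy obtains and rotates the TLS certificate automatically via ACME.
matrix.example.com {
    # Proxy Matrix traffic to the homeserver, assumed to be
    # listening on the Conduit-family default port 6167.
    reverse_proxy localhost:6167
}
```

Federation delegation via /.well-known may need an extra handler depending on your setup; the point is just that certificate issuance and rotation require zero manual handling.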
Wildebeest ceased maintenance one month after the article's publication, adding a similar comment several months later[1]:
> :warning: This project has been archived and is no longer actively maintained or supported. Feel free to fork this repository, explore the codebase, and adapt it to your needs. Wildebeest was an opportunity to showcase our technology stack's power and versatility and prove how anyone can use Cloudflare to build larger applications that involve multiple systems and complex requirements.
Honestly I like Cloudflare's CDN and DNS but beyond that I don't really trust much else from them. In the past though their blog has been one of the best in the space and the information has been pretty useful, almost being a gold standard for postmortems, but this seems especially bad. Definitely out of line compared to the rest of their posts. And with the recent Cursor debacle this doesn't help. I also don't really get their current obsession with porting every piece of software on Earth to Workers recently...
>I also don't really get their current obsession with porting every piece of software on Earth to Workers recently...
Because their CDN/DNS is excellent software, but it's not a massive moat. Workers, on the other hand, is.
It's like the difference between running something on Kubernetes vs. Lambdas. One you can somewhat pivot between vendors with; the other requires massive rewrites, which means most executives won't transition away from it due to the high potential for failure.
Yeah, I like that I can just upload static HTML and host it there for free, but anything more, I dunno. It's all about vendor lock-in with their products.
I don't know why Cloudflare jumps on any bandwagon with a Cloudflare Workers version rather than implementing the "classics", like a blog or a forum that you can host with Cloudflare Workers.
This appears to be the author's first blog post for Cloudflare, Cloudflare being the author's first post-military employer. For his sake and Cloudflare's, this deserves an AAR that I hope becomes a teachable moment for both.
“This architecture shifts the paradigm for self-hosting. It turns "running a server" from a chore into a utility. You get the sovereignty of owning your data without the burden of owning the infrastructure”
Yeah, this is just shameful. Obviously written by an LLM with zero oversight. If this engineer doesn't get fired I'll lose all trust in Cloudflare.
He shouldn't get fired. For all we know he might actually be a decent employee who had a, ekhm, temporary lapse of reason. He didn't destroy anything (except damaging CF brand).
The best CF can do is to post a post-mortem and improve procedures so that can't happen anymore.
Well that is an interesting idea and proof of concept. I agree that the post is not the best I have seen from Cloudflare, and it shouldn't suggest that the code is production ready, but it is an interesting use-case.
LLMs made them twice as efficient: with just one release, they're burning tokens and their reputation.
It's kinda mindblowing. What even is the purpose of this? It's not like this is some post on the vibecoding subreddit, this is fricken Cloudflare. Like... What the hell is going on in there?
I usually work in branches in a private repo, squash and merge features / fixes in the private repo, and only merge the clean, verified, extensively tested merges back to public.
You don't need to see every single commit and the exact chronology of my work, snapshots is enough :)
To be honest sometimes on my hobby project I don’t commit anything in the beginning (I know not great strategy) and then just dump everything in one large commit.
I’ve also been guilty of plugging at something, and squashing it all before publishing for the first time because I look at the log and I go “no way I can release this, or untangle it into any sort of usefulness”.
I think that's a reasonable heuristic, but I have projects where I primarily commit to an internal Gitea instance, and then sometimes commit to a public GitHub repo. I don't want people to see me stumbling around in my own code until I think it's somewhat clean.
That is totally fine... as long as you don't call it 'production grade'. I wouldn't call anything production grade that hasn't actually spent time (more than a week!) in actual production.
But if the initial commit contains the finished project then that suggests that either it was developed without version control, or that the history has deliberately been hidden.
It was/is quite common for corporate projects that become open-source to be born as part of an internal repository/monorepo, and when the decision is made to make them open-source, the initial open source commit is just a dump of the files in a snapshotted public-ready state, rather than tracking the internal-repo history (which, even with tooling to rebase partial history, would be immensely harder to audit that internal information wasn't improperly released).
So I wouldn't use the single-commit as a signal indicating AI-generated code. In this case, there are plenty of other signals that this was AI-generated code :)
> DevSecOps Engineer
> United States Army Special Operations Command · Full-time
> Jun 2022 - Jul 2025 · 3 yrs 2 mos
Honestly, it is a little scary to see someone with a serious DevSecOps background ship an AI project that looks this sloppy and unreviewed. It makes you question how much rigor and code quality made it into their earlier "mission critical" engineering work.
Considering how many times I've heard "don't let perfection be the enemy of good enough" when the code I have is not only incomplete but doesn't even do most of the things asked (yet), I'd wager quite a lot
Maybe, but the group of people they are/were working with are Extremely Serious, and Not Goofs.
This person was in communications of the 160th Special Operations Aviation Regiment, the group that just flew helicopters into Venezuela. ... And it looks like a very unusual connection to Delta Force.
I don't know what's more embarrassing: the deed itself, not recognizing the bullshit produced, or the hasty attempt at a cover-up. Not a good look for Cloudflare. Does nobody read the content they put out? You can just pretend to have done something and they will release it on their blog, yikes.
Taking a best-faith approach here, I think it's indicative of a broader issue, which is that code reviewers can easily get "tunnel vision" where the focus shifts to reviewing each line of code, rather than necessarily cross-referencing against both small details and highly-salient "gotchas" of the specification/story/RFC, and ensuring that those details are not missing from the code.
This applies whether the code is written is by a human or AI, and also whether the code is reviewed by a human or AI.
Is a Github Copilot auto-reviewer going to click two levels deep into the Slack links that are provided as a motivating reference in the user story that led to the PR that's being reviewed? Or read relevant RFCs? (And does it even have permission to do all this?)
And would you even do this, as the code reviewer? Or will you just make sure the code makes sense, is maintainable, and doesn't break the architecture?
This all leads to a conclusion that software engineering isn't getting replaced by AI any time soon. Someone needs to be there to figure out what context is relevant when things go wrong, because they inevitably will.
This is especially true if the marketing team claims that humans were validating every step, but the actual humans did not exist or did no such thing.
If a marketer claims something, it is safe to assume the claim is at best 'technically true'. Only if an actual engineer backs the claim it can start to mean something.
> Every line was thoroughly reviewed and cross-referenced with relevant RFCs
The issue in the CVE comes from a direct contradiction of the RFC. The RFC says you MUST check redirect URIs (and, as anyone who's ever worked with OAuth knows, the functionality around redirect URIs is a staple of how OAuth works in the first place; this isn't some obscure edge case). They didn't make a mistake, they simply did not implement this part of the spec.
When they said every line was "thoroughly reviewed" and "cross referenced", yes, they lied.
I mean, you can't review or cross reference something that isn't there... So interpreting in good faith, technically, maybe they just forgot to also check for completeness? /s
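For context on how small the omitted check is, here's a hedged sketch (function name and URIs are mine, not from the actual repo): RFC 6749 requires the authorization server to compare the requested redirect_uri against the pre-registered values, and exact string matching is the safe interpretation.

```rust
// Illustrative sketch of the redirect_uri validation the spec requires:
// exact match against the registered set, no prefix matching, no
// pass-through of arbitrary hosts.
fn redirect_uri_allowed(registered: &[&str], requested: &str) -> bool {
    registered.iter().any(|uri| *uri == requested)
}

fn main() {
    let registered = ["https://app.example/callback"];
    // Exact match passes; anything else, including an attacker-controlled
    // host or a path variation, must be rejected before issuing a code.
    assert!(redirect_uri_allowed(&registered, "https://app.example/callback"));
    assert!(!redirect_uri_allowed(&registered, "https://evil.example/callback"));
    assert!(!redirect_uri_allowed(&registered, "https://app.example/callback/../x"));
    println!("redirect checks ok");
}
```

A handful of lines, which is what makes skipping it (while claiming RFC cross-referencing) so hard to excuse.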
I hope this isn't in bad taste, but I applied for the editor-in-chief position at Cloudflare back in August when they had it open. I'm still very interested in the role. If anyone at cf is reading this, my email is bro @ website in bio.
Blog post now says: "* This post was updated at 11:15 a.m. Pacific time to clarify that the use case described here is a proof of concept. Some sections have been updated for clarity." But parts of it are still misleading.
What? That's like saying "you should implement TLS instead of HTTP"!
They do entirely different things: MLS is a key agreement protocol, equivalent to the Double Ratchet that Matrix uses for E2EE today. Matrix can use both.
MLS is an IETF standard. The server is easy to write, and easy to make scalable (no complicated merge algorithm required, unlike Matrix). Finally, individual chatrooms scale to an order of magnitude larger size vs. Matrix.
MLS is superior in every way to Matrix as it exists today if you need to implement encrypted chat rooms for your app.
Source: guy who has implemented both, including extending Matrix to scale the server to Twitter scale (by, in essence, making it work like MLS, only worse due to the merge algorithm).
What on earth are you talking about? They do entirely different things! MLS is an E2EE protocol, whereas Matrix is effectively a conversation-syncing protocol which supports multiple E2EE mechanisms, including MLS.
Source: Guy who started Matrix, was in the room at IETF 101 when MLS was proposed and ratified it for Matrix, and has been working away on the various approaches to use MLS on Matrix.
Um, what's up with companies trying to recreate really big projects using vibe coding?
Like, okay: I'm an indie dev, and if I create a vibe-coded project, I create it for fun (I'm burning other people's VC money doing so, though I'd actually consider that a positive).
But what's up with large companies, who can actually sponsor a human to do the work, having AI agents vibe code it instead?
First it was Cursor, which spent almost $3-5 million (I just came here after watching a good YouTube video about it), and now Cloudflare.
Like, large corpos, if you are so interested in burning money, at least burn it on something new (perhaps that's a fair critique of the browser thing by Cursor too, but yeah).
I'm recently in touch with a guy from the UK (who sadly became disabled due to an accident when he was young) who is a VPS provider, got really impacted by the WHMCS bill increase, and migrated to HostBill at 1200 euros. Show him some HN love (https://xhosts.uk/)
I had vibe coded a Go alternative. I'm currently running it in the background to make it better for his use cases, and will probably open source it.
The thing with WHMCS alternatives is that I made one using gVisor+tmate, but most should/have to build on top of KVM/QEMU directly. I do feel that WHMCS is definitely one of the most rent-seeking projects, and actually writing a Go alternative makes sense (at least to me).
Can there not be an AI agent that detects what people are being charged for (unfairly) online, so that these large companies who want to build things can create open source alternatives?
I'm not saying it stops being slop, but it just feels like a good way of making use of this tech, as opposed to creating complete spaghetti slop nobody wants. Maybe it was an experiment, but it failed (Cursor and this).
A bit ironic, because I contacted the xhosts.uk provider wanting to create a Cloudflare Tunnels alternative, after seeing 12% of the internet casually going through CF and realizing I was very heavily reliant on CF Tunnels for my own projects, which I wasn't really happy about.
My guess: a program manager high up in the engineering org, and not a people manager. But suggesting a high-up program manager doesn't direct people is also wrong. TPMs "make the wheels go 'round" in engineering. They very much control the fate of other individuals, and often whole teams, so their integrity and capability both matter considerably, which means they should not be passing themselves off as coders or their individual code projects as production-ready.
Product Managers are generally not "Senior Engineering," though I suppose it is possible. IMO, it's a whole lot more likely a program manager than a product manager.
I think it's a pretty big deal for a major company to put out a blog post about something that is "production grade" and pushing customers to use it without actually making it production grade.
I’m plenty calm. There’s just nothing to debate here: the blog post and repo are a conscious, deliberate, and egregious misrepresentation of fact.
I would absolutely say exactly the same things to the author’s face as I’m saying right now. I would never work for a company that condones this in a million years, as a matter of principle.
And you don't seem to understand how the conversation went. I was obviously talking about my first comment, to which they answered.
> Which comments do you see doing that? Exactly?
Interestingly, those that made me write my first message were removed. Not that it was because of my message obviously, which mostly got me downvotes :-).
But the next best one would be:
"public shaming is the next best thing. I sincerely hope links to this incident will haunt him every time someone googles his name forevermore"
(after implying that ideally they should lose their job for this)
This is a bit more than overselling a proof of concept. He made claims that were not correct, and presented some LLM generated code as point of pride. And not on his blog, but a company's website.
He's emblematic of the era we now live in. Vibe coded projects that the "developer" didn't learn anything from, posted using LLMs. People have zero shame, zero curiosity, zero desire in learning and understanding what they're working on.
Also it doesn't make sense to escalate an interaction by swearing at a person and simultaneously asking them to calm down.
> Also it doesn't make sense to escalate an interaction by swearing at a person and simultaneously asking them to calm down.
I found it fun :-).
I kindly ask people to try to empathise with a random human being who is most certainly not used to being shamed publicly, and they tell me to check myself in the mirror.
In a real "engineering" role, this person would be stripped of their license for stamping "production grade" on a bunch of AI slop.
That doesn't exist in our trade, so yeah, public shaming is the next best thing. I sincerely hope links to this incident will haunt him every time someone googles his name forevermore.
There was a piece a little while back, most probably from Cory Doctorow, about how some humans have already become Reverse Centaurs:
Controlled by a machine and only there to put their names and reputations on the line when the machine messes up.
Maybe this applies more to a writer having to generate 20 articles per hour in some journalism sweatshop, pressured to push out anything that will catch the winds of SEO augmented news, but I would not discount the level of pressure that the author of the blog post was put under to produce something, anything...
Based on the published profile, I strongly suspect that this person is not paid that well at all. You are not looking at a FAANG kind of deal here, most certainly.
So maybe spare one second of thought for that future where many many folks are just there to be burnt up in some cancellation machine whilst profit gets accumulated elsewhere...
As you say, it's pretty hard to argue that the average quality of software engineering deserves the word "engineering" at all. Most software is bad across the board, and developers on average get pretty good salaries for... whatever it is they bring to the world.
Still I don't think that some random employee deserves to be harassed and publicly shamed for a bad blog post.
> In other industries this would be a gross ethical issue and potentially a legal one
Yes, but this is not another industry. Also in other industries, some say that "full self-driving is coming tomorrow" or "we can send millions of people to live on mars".
> public criticism for public fraudulence is "harassment", I guess? C'mon, man.
I never said "don't criticise". I have seen comments that I found very disrespectful early when this post started growing, and I tried to call for some empathy for the human being who made that mistake.
The person who wrote the article probably does not benefit from lying, I don't think it was the intent. It is a bad post, don't get me wrong, but maybe there is no need to insult the author just for that.
When called out, they deleted the TODOs. They didn't implement them, they didn't fix the security problems, they just tried to cover it up. So no, at this point the dishonesty is deliberate.