Enough to consider employing others to keep the business running without me, not enough to employ myself full-time, unfortunately. For the "from what?" bit, see my bio.
I mean, Cap'n Proto is written by the same person who created protobuf, so they are legit (and that somewhat jokey claim is simply that it requires no parsing).
Google loves to reinvent shit because they didn't understand it. And to get promo. In this case, ASN.1. And protobufs are so inefficient that they drive up latency and datacenter costs, so they were a step backwards. Good job, Sanjay.
Really dismissive and ignorant take from a bystander. Back it up by delivering something better instead of waving a pitchfork for no reason.
This bystander has been using protobufs for more than ten years. I'm not sure what I need to deliver since ASN.1, Cap'n Proto and Flatbuffers are all more efficient and exist already. ASN.1 was on the scene in 1984 and was already more efficient than protobufs.
Protobuf has far better ergonomics than ASN.1. ASN.1 is an overcomplicated design-by-committee mess. Backwards compatibility in particular is much harder.
I don't doubt your experience, but with X.509 having evolved substantially, and ASN.1 on billions (if not tens of billions) of devices, in practice it seems OK. And it was formally verified early.
ASN.1 on billions of devices doesn’t make it less of an anti-ergonomic, design-by-committee piece of crap. Unless your goal is to be binary-compatible with these devices, one should be free to explore alternatives.
By all means, keep using it, but it might be worth figuring out why other people don’t. Hint: it’s not because they’re more stupid than you or are looking to get promoted by big G.
(Personally, I like the ideas and binary encoding behind Cap'n Proto more than all the alternatives)
One of the advantages of protobuf I never see anyone highlight is how neat and well-designed the wire format is, in terms of backward/forward compatibility and the low-level stuff you can do with it. Very useful when building big and demanding systems over time.
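To make that concrete, here's a minimal sketch of how the wire format can be scanned (TypeScript; illustrative only, not a full parser, and varints are capped at 32 bits for brevity). The key property: every field starts with a tag encoding both the field number and a wire type, and the wire type alone tells a decoder how to skip a value, so fields from newer schemas can be stepped over safely.

    // Decode a base-128 varint starting at pos; return [value, nextPos].
    // (Capped at 32-bit values for brevity -- real decoders handle 64 bits.)
    function readVarint(buf: Uint8Array, pos: number): [value: number, next: number] {
      let value = 0;
      let shift = 0;
      for (;;) {
        const b = buf[pos++];
        value |= (b & 0x7f) << shift;
        if ((b & 0x80) === 0) return [value, pos];
        shift += 7;
      }
    }

    // Yield [fieldNumber, wireType, rawValue] for every top-level field.
    // Unknown field numbers are still skippable because the wire type alone
    // determines the value's extent -- the mechanism behind protobuf's
    // backward/forward compatibility.
    function* scanFields(
      buf: Uint8Array,
    ): Generator<[field: number, wireType: number, raw: number | Uint8Array]> {
      let pos = 0;
      while (pos < buf.length) {
        const [tag, afterTag] = readVarint(buf, pos);
        const field = tag >>> 3;
        const wireType = tag & 0x7;
        pos = afterTag;
        switch (wireType) {
          case 0: {                 // varint (int32/int64/bool/enum...)
            const [v, next] = readVarint(buf, pos);
            pos = next;
            yield [field, wireType, v];
            break;
          }
          case 1:                   // fixed 64-bit
            yield [field, wireType, buf.slice(pos, pos + 8)];
            pos += 8;
            break;
          case 2: {                 // length-delimited (string/bytes/message)
            const [len, next] = readVarint(buf, pos);
            yield [field, wireType, buf.slice(next, next + len)];
            pos = next + len;
            break;
          }
          case 5:                   // fixed 32-bit
            yield [field, wireType, buf.slice(pos, pos + 4)];
            pos += 4;
            break;
          default:
            throw new Error(`unknown wire type ${wireType}`);
        }
      }
    }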
For high performance and critical stuff, SBE is much more suitable, but it doesn't have as good of a schema evolution story as protobuf.
I agree. It might be faster if you don't actually deserialise the data into native structs, but then your codebase will be filled with fairly horrific Cap'n Proto C++ code.
Your postscript explains why: using the same "btn-primary" as every other user of the framework hints that you're not building something with its own visual identity.
For the rest of us, we throw that "bg-sky-500 hover:bg-sky-600 active:bg-sky-700 text-white px-4 py-2 rounded-lg" (or whatever color and shape matches our brand) into a component with a variant=primary property and call it a day. What developers actually see on a day-to-day basis is <Button variant="primary" />.
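A minimal sketch of that pattern (React + TypeScript; the Button name and the variant list are illustrative, not from any particular codebase):

    import React from "react";

    const variants = {
      primary:
        "bg-sky-500 hover:bg-sky-600 active:bg-sky-700 text-white px-4 py-2 rounded-lg",
      secondary:
        "bg-gray-100 hover:bg-gray-200 active:bg-gray-300 text-gray-900 px-4 py-2 rounded-lg",
    } as const;

    type ButtonProps = React.ButtonHTMLAttributes<HTMLButtonElement> & {
      variant?: keyof typeof variants;
    };

    export function Button({ variant = "primary", className = "", ...rest }: ButtonProps) {
      // The utility-class soup lives here, once; call sites just write
      // <Button variant="primary" />.
      return <button className={`${variants[variant]} ${className}`.trim()} {...rest} />;
    }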
> Your postscript explains why: using the same "btn-primary" as every other user of the framework hints that you're not building something with its own visual identity.
You know that Bootstrap is trivial to customise, right?
It turns out identifying primary and secondary buttons is a pretty standard thing in any kind of UI that has... buttons.
TailwindCSS is useful for applying styles to isolated components, in paper-shredder scenarios. Devs using it get to ignore the cascade, don't have to name things, and can use the predefined spacing and colors.
It is of course quite unmaintainable (good luck with updating the class soup for a bunch of components across a project).
I personally just ... cannot. CSS in 2026 is incredibly powerful and beautiful. Embracing the cascade allows for minimal CSS (see ITCSS methodology). Standardizing spacing and type with https://utopia.fyi is brilliant. Standardizing colors with custom props is trivial.
But it seems that a lot of people are not paid to think about CSS. Tailwind embraces that. LLMs love it because it reduces the complexity of pure CSS.
LLMs are quite capable of rewrites these days - there are few tasks where I'd actually want 10 parallel agents, but rewriting off Next.js would've been faster with that setup.
(I ended up just using the Claude web interface and making it use a checklist; it took 8 hours)
True, workerd is open source. But the bindings (KV, R2, D1, Queues, etc.) aren't – they're Cloudflare's proprietary services. OpenWorkers includes open source bindings you can self-host.
I tried to run it locally some time ago, but it's buggy as hell when self-hosted. It's not even worth trying given that CF itself doesn't recommend it.
I'm curious what bugs you encountered. workerd does power the local runtime when you test CF Workers in dev via wrangler, so we don't really expect/want it to be buggy.
Specifically, half of the services operate locally, and the other half require CF services. I mainly use Claude Code to develop, and it often struggles to replicate the local environment, so I had to create another worker in CF for my local development.
Initially, the idea was to use CF for my side projects as it's much easier than K8s, but after wrestling with it for a month I decided it wasn't really worth investing that much, and I moved back to K8s with FluxCD instead, even though that's overkill as well.
> There is a big "WARNING: This is a beta. Work in progress"
Ughhhh that is because nobody ever looks at the readme so it hasn't been updated basically since workerd was originally released. Sorry. I should really fix that.
> Specifically, half of the services operate locally, and the other half require CF services.
workerd itself is a runtime for Workers and Durable Objects, but is not intended to provide implementations of other services like KV, D1, etc. Wrangler / miniflare provides implementations of most of these for local testing purposes, but these aren't really meant for production.
But Workers + DO alone are enough to do a whole lot of things...
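For example, a self-contained Worker needs no proprietary bindings at all (a minimal TypeScript sketch using the standard module-Worker shape):

    export default {
      async fetch(request: Request): Promise<Response> {
        // Plain fetch handler: no KV/D1/R2 bindings, so it behaves the same
        // under self-hosted workerd as on Cloudflare.
        const url = new URL(request.url);
        return new Response(`Hello from ${url.pathname}\n`, {
          headers: { "content-type": "text/plain" },
        });
      },
    };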
Thanks a ton for the quick response! I totally get that workerd is not intended to be an emulator of all CF services, but the fact that I still need an external dependency for local development, and that the code I developed can't be used outside of the CF environment, makes me feel locked into the platform.
I'm mostly using terminal agents to write and deploy code. I made a silly mistake by not reviewing the code before merging it into main (side project, zero users), and my Durable Object alarms got into an infinite loop, and I got a $400 bill in an hour. There was no way to set rate limits for the AI binding in Workers, and I didn't get any notification, so I created a support ticket 2 months ago, which hasn't been answered to this date.
That was enough for me to move off CF as a long-time user (>10 years) and believer (CF is still one of my biggest stocks). In a world where AI writes most of the code, it's scary to be required to deploy to a cloud that doesn't have any way to set rate limits.
I learned the hard way that I should use AI Gateway in this situation, but authentication is harder with it, and agents prefer embedded auth, which makes them pick the AI binding over AI Gateway. K8s isn't easy to maintain, but at least I can fully control the costs without worrying about the cost of experimentation.
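For anyone who wants to guard against the same failure mode: the loop happens when an alarm handler unconditionally reschedules itself. A minimal sketch (TypeScript, Cloudflare Durable Objects API; the run cap is my own illustrative circuit breaker, not a platform feature):

    export class LoopingAlarm {
      // Sketch only: a real DO constructor also receives an env argument.
      constructor(private state: DurableObjectState) {}

      async alarm(): Promise<void> {
        // Hypothetical circuit breaker -- NOT a Cloudflare feature. Without
        // some guard like this, the unconditional setAlarm() below re-fires
        // forever and bills you for every invocation.
        const runs = ((await this.state.storage.get<number>("runs")) ?? 0) + 1;
        await this.state.storage.put("runs", runs);
        if (runs > 1000) return; // stop rescheduling instead of looping

        // ... do the actual work (e.g. call an AI binding) ...

        await this.state.storage.setAlarm(Date.now() + 60_000); // fire again in 1 min
      }
    }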
Lenovo has fantastic recent refurbs. It's a bit of a game, but you can find some for around $400 or less.
My big beef with Macs is that I need BIG SSDs. If I want a 4TB SSD in a MacBook, it starts at around $3,000. Recently I purchased a laptop with two SSD slots, although disappointingly only one is easy to access.
I'm tempted to go to Microcenter and tell them to replace the stock SSD with a 4TB one (the stock SSD is the one behind a difficult-to-remove heat sink), and then I'd add another 4TB SSD myself. Alternatively, I could just pay $800 for an 8TB SSD, install it in a laptop that costs around $1,300-1,500, and I'm only spending $2,300.
On a Mac that's about $5,000. I make music and hate external drives with a passion.
I did something similar to get OnlineOrNot's Twitter handle - I realised that unclaimed handles would 404, and so I set up a check to get an alert when that happened.
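The core of that kind of check is tiny - a rough TypeScript sketch (the URL and alerting are illustrative; Twitter may block anonymous requests, so treat this as the idea rather than production code):

    async function handleLooksFree(handle: string): Promise<boolean> {
      // Unclaimed handles 404; redirects are left unfollowed so a rename
      // redirect doesn't mask the status code.
      const res = await fetch(`https://twitter.com/${handle}`, { redirect: "manual" });
      return res.status === 404; // true => time to fire your alert
    }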