An important part of the story here, not mentioned in this post but noted elsewhere (https://www.prisma.io/blog/from-rust-to-typescript-a-new-cha...), is that they gave up on offering client libraries for languages other than JavaScript/TypeScript. Doing this while mostly sharing a single implementation among all languages was much of the original reason to use Rust, because Rust is a good "lowest common denominator" language for FFI and TypeScript is not; it wasn't entirely about performance. If they hadn't tried to do this, they would likely never have used Rust; if they hadn't given up on it, they would likely still be using Rust.
Yeah, the whole point of Prisma 2 was to be multi-language and multi-DB, with a Rust server sitting between you and the DB. There are a lot of advantages to that approach in the enterprise: you can do better access control, stats, connection pooling, etc. (Formal is a YC company in that space). Prisma 1 was a Scala implementation of that vision.
Anyway, end of an era. There were a couple of community bindings in Python and Java that are now dead, I assume. I was heavily invested in Prisma around 4-5 years ago; funnily enough, that is what got me started on my Rust journey.
I'm sure you could get even greater speed by removing Prisma. All you need is a migration tool and a database connection. The most recent example in my work where we removed an ORM resulted in all of our engineers, particularly the juniors, becoming Postgres wizards.
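A minimal sketch of that setup, assuming node-postgres ("pg") and a hypothetical users table; parameterized queries keep it injection-safe even without an ORM:

    import { Pool } from "pg";

    // One pool per process; reads DATABASE_URL the way most migration tools do.
    const pool = new Pool({ connectionString: process.env.DATABASE_URL });

    // $1-style placeholders are bound server-side, not string-concatenated.
    export async function findUserByEmail(email: string) {
      const { rows } = await pool.query(
        "SELECT id, email, created_at FROM users WHERE email = $1",
        [email],
      );
      return rows[0] ?? null;
    }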
Congratulations, you have now increased the cognitive load to be productive on your team and increased the SQL injection attack surface for your apps!
I jest, but ORMs exist for a reason. And if I were a new senior or principal on your team, I'd be worried that there was now an expectation for a junior to be a wizard at anything, even more so when that thing is a rich and complex RDBMS toolchain with more potential guns pointing at feet than anything else in the stack.
I spent many years cutting Rails apps, and while ActiveRecord was rarely my favourite part of those apps, it gave us so much batteries-included functionality that we realised it was best to embrace it. If AR was slow or we had to jump through hoops, that suggested the data model was wrong, not that we should dump AR: we'd go apply some DDD- and CQRS-style thinking and consider a view model and how to populate it asynchronously.
I think this needs some nuance - this is definitely true in some domains.
Most of the domains I worked in it was the other way around: using an ORM didn’t mean we could skip learning SQL, it added an additional thing to learn and consider.
In the last few years writing SQLAlchemy or the Django ORM, the teams I was on would write queries in SQL and then spend the rest of the day trying to make the ORM reproduce them. At some point it became clear how silly that was, and we stopped using the ORMs.
Maybe it's got to do with aggregate-heavy domains (I particularly remember windowing aggregates being a pain in SQLAlchemy?), or large datasets (again from memory: on a 50-terabyte Postgres machine, the DB would go down if an ORM generated anything that scanned the heap of the big data tables), or highly concurrent workloads where careful use of SELECT FOR UPDATE was needed.
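To make the windowing point concrete, here is the kind of query meant here, written as raw SQL through node-postgres to stay in this thread's language; the payments table and its columns are made up:

    import { Pool } from "pg";

    const pool = new Pool();

    // A running total per account: one window function in SQL, but awkward
    // to coax out of some ORM query builders.
    export async function runningTotals(accountId: number) {
      const { rows } = await pool.query(
        `SELECT id, amount,
                SUM(amount) OVER (PARTITION BY account_id ORDER BY created_at)
                  AS running_total
           FROM payments
          WHERE account_id = $1`,
        [accountId],
      );
      return rows;
    }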
> In the last few years writing SQLAlchemy or the Django ORM, the teams I was on would write queries in SQL and then spend the rest of the day trying to make the ORM reproduce them.
Ah yes, good times! Not Django for me but similar general idea. I'm not a big fan of ORMs: give me a type safe query and I'm happy!
Same here. I am a big advocate of knowing your SQL, and of stored procedures: no need to waste network traffic on data that is never going to be shown in the application.
Using an ORM and escape-hatching to raw SQL is pretty much industry-standard practice these days, and definitely better than no ORM imho. I have code that's basically a lot of
    result = orm.query({raw sql}, parameters)
It's as optimal as any other raw SQL query. Now that may make some people scream "why use an ORM at all then!!!", but in the meantime:
I have wonderful and trivially configurable db connection state management
I have the ability to do things really simply when I want to; I can still use the ORM magic for quick prototyping, or when I know the query is actually trivial object fetching.
Passing the result into an object that matches the shape of the query is definitely nicer with a good ORM library than with every raw SQL library I've used. (A rough sketch of the whole pattern follows below.)
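For Prisma specifically, that escape hatch looks roughly like this: $queryRaw's tagged template turns interpolations into bound parameters, and the generic gives you a typed result (the User table and UserRow type are made up for the example):

    import { PrismaClient } from "@prisma/client";

    const prisma = new PrismaClient();

    type UserRow = { id: number; email: string };

    // ${minAge} becomes a bound parameter, not string concatenation.
    export async function adults(minAge: number): Promise<UserRow[]> {
      return prisma.$queryRaw<UserRow[]>`
        SELECT id, email FROM "User" WHERE age >= ${minAge}
      `;
    }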
Every project I've come across that uses an ORM has terrible database design: all columns nullable, missing foreign-key indexes, doing things in application code that could easily be done by triggers (fields like created, modified, ...), wrong datatypes (varchar(n) all over the place, just wwwhhhhyyy; floats for money; ...), sentinel values (this one time, at bandcamp, I came across a datetime field that used a sentinel value, and it only worked because of two datetime-handling bugs (so two wrongs did make a right) and the server being in the UTC timezone), and the list goes on and on...
I think this happens because ORMs make you treat the database as a dumb datastore and hence the poor schema.
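For contrast, roughly the kind of schema discipline being described, written as a plain-SQL migration string (hypothetical invoice table, Postgres flavor):

    export const up = `
      CREATE TABLE invoice (
        id          bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
        customer_id bigint NOT NULL REFERENCES customer (id),
        amount      numeric(12,2) NOT NULL,             -- exact; never float for money
        created_at  timestamptz NOT NULL DEFAULT now(), -- set by the DB, not app code
        note        text                                -- nullable only where NULL means something
      );
      CREATE INDEX invoice_customer_id_idx ON invoice (customer_id); -- index the FK
    `;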
Honestly, database schema management doesn't scale particularly well under any framework, and I've seen those issues start to crop up in every org once you have enough devs constantly changing the schema. It happens with ORMs and with raw SQL.
When that happens, you really, really should look into the much-maligned NoSQL alternatives. Despite the hatred they get (similar to ORMs), NoSQL data stores actually have some huge benefits, especially at the point where DB schema maintenance starts to break down. I.e., who cares if someone adds a new field to the FB Newsfeed object in development when ultimately it's a key-value store fetched with GraphQL queries? The only person it'll affect is the developer who added that field; no one else will even notice the new key-value object unless they fetch it. There's no way to make SQL work at all at scale (scale in terms of the number of devs messing with the schema), but a key-value store with GraphQL works really well there.
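A tiny illustration of that "nobody notices the new field" property, using graphql-js with a made-up FeedItem type:

    import { graphql, buildSchema } from "graphql";

    // reactionCount is the newly added field.
    const schema = buildSchema(`
      type FeedItem { id: ID! text: String reactionCount: Int }
      type Query { feed: [FeedItem] }
    `);

    const rootValue = {
      feed: () => [{ id: "1", text: "hello", reactionCount: 4 }],
    };

    // Existing clients keep selecting only the fields they know about,
    // so the new field never shows up in their responses.
    graphql({ schema, source: "{ feed { id text } }", rootValue }).then(
      (res) => console.log(res.data), // { feed: [ { id: '1', text: 'hello' } ] }
    );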
Small orgs where you're the senior eng and can keep the schema in check in review? Use an ORM with a traditional DB, escape-hatch to raw SQL when needed, and keep a close eye on any schema changes.
Big orgs where there are tons of teams wanting to change things at high velocity? I have no idea how to make either SQL or ORMs work in those cases. I do know from experience how to make GraphQL and a key-value store work well, though, and that's where the above issues happen in my experience. It's really not an ORM-specific issue. I suggest going down the NoSQL route in those cases.
NoSQL is even worse, data gets duplicated and then forgotten, so it doesn't get updated correctly, or somebody names a field "mail" and another person names it "email" and so on...
There is zero guarantee that whatever you ask the database for contains anything valid, so your code gets littered with null and undefined checks. And if you ask for, say, a field "color", what is it going to contain? A hex value? rgb()? rgba()? An integer? So you need to check that too.
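A sketch of what those defensive checks end up looking like in TypeScript, with a hypothetical document shape:

    // Nothing guarantees what "color" holds, so every consumer has to normalize.
    type RawDoc = { color?: unknown };

    function normalizeColor(doc: RawDoc): string | null {
      const c = doc.color;
      if (typeof c === "string" && /^#[0-9a-fA-F]{6}$/.test(c)) return c; // "#ff8800"
      if (typeof c === "string" && c.startsWith("rgb")) return c;         // "rgb(...)" or "rgba(...)"
      if (typeof c === "number" && Number.isInteger(c) && c >= 0 && c <= 0xffffff) {
        return "#" + c.toString(16).padStart(6, "0");                     // packed integer
      }
      return null; // unknown encoding: the caller still has to handle null
    }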
In my experience NoSQL is even worse, they are literally data dumps (as in garbage dump).
This is a decent example of not buying into, getting pulled along by, or being forced into corporate-pushed hype, and of not eliminating one's options. They re-evaluated and looked at what programming language was best for their situation, which was removing the Rust language and using something else. It then turned out they actually got gains: greater user contributions, simplicity, efficiency, and even speed.
> what programming language was best for their situation, which was removing the Rust language and using something else
This is correct, but I'd say that the key was removing Rust, not using something else: fewer moving parts, fewer JS runtime boundaries to cross, no need to make certain that the GC won't interfere, etc.
Also, basically any rewrite is a chance to drop entrenched decisions that proved to be not so great. Rewriting a large enough part of Prisma likely allowed them to address quite a few pieces of tech debt that were not comfortable to tackle in small incremental changes. Consider "Prisma requires ~98% fewer types to evaluate a schema. Prisma requires ~45% fewer types for query evaluation.": this must have required quite a bit of rework of the whole thing. Removing Rust in the process was likely almost a footnote.
> This is a decent example of not buying into, getting pulled along by, or being forced into corporate-pushed hype
It seems that maybe they did get hyped into Rust, because it's not clear why they believed Rust would make their JavaScript tool easier to develop, simpler, or more efficient in the first place.
Biome and oxc are developer tools. I don't know why in the world they would do this, but it sounds like they were using Rust at runtime to interact with the database?
I claim that 99.9999% of software should be written in a GC language. Very, very, very few problems actually require memory management; it is simply not part of the business requirements. That said, how come the only language that comes close to this criterion is Go, except it hasn't learned about clean water (algebraic data types)?
Meanwhile, earlier this week we had a big conversation about the skyrocketing cost of RAM. While it's technically true that GC doesn't mean a program has to hold allocations longer than the equivalent non-GC code would, I've personally never seen a GC'd program that didn't use multiple times as much RAM.
And then you have non-GC languages like Rust where you're nominally managing memory yourself, in the form of keeping the borrow checker happy, yet you never see free()/malloc() in a developer's own code. It might as well be GC from the programmer's POV.
You can add your own custom GC in C (you can add your own custom anything to any language; it's all just 1s and 0s at the end of the day), but it is not a feature provided by the language out of the box like in Rust. That is not the same thing at all.
Large parts of web browsers (like the entire Firefox UI) are written in JavaScript already.
Operating systems _should_ use GC languages more. Sure, video needs absolute maximum performance... but there is no reason my keyboard or mouse driver should be written in a non-GC language.
I'm in the "Pro-Rust" camp (not fanboy level "everything must be rewritten in rust", but "the world would be a better place if more stuff used Rust"), and I love this post.
They saw some benefits to Rust, tried it, and continued to measure. They identified that the TypeScript/Rust language boundary was slow, and noticed an effect on their contributions. After further research, they realized there was a faster way that didn't need the Rust dependency.
I'm not sure your characterization is all that accurate.
Originally, they thought they could build a product that worked across many languages. That necessitated a "lowest common denominator" language to provide an integratable core, a niche that has always been strangely lacking in choice. Zig had only been announced a few months earlier, so it wasn't really ready to be a contender. For all intents and purposes, C, C++, and Rust were the only options.
Once the product made it to market, it became clear that the TypeScript ecosystem was the only one buying in. Upon recognizing the business failure, the "multi-language" core didn't make sense anymore. It was a flawed business model that forced them into using Rust (it could have been C or C++ instead, but yeah), and once they gave up on that business model they understood it would have been better to have written it in TypeScript in the first place. No doubt it would have been, if it weren't for the lofty pie-in-the-sky dreams of trying to make it more than the market was willing to accept. Now they got the opportunity to actually do it.
> I'm in the "Pro-Rust" camp (not fanboy level "everything must be rewritten in rust", but "the world would be a better place if more stuff used Rust")
Techno-religiosity is irrational and unprofessional, sure, but that's some weak, eye-rolling both-sidesism.
The world would be a better place™ if more stuff used better, more formal tools and methods to prove that code is correct and bug-free. This is easier in some languages than others, but still: there are a lot of formal verification tools that aren't used enough, and methodology patterns and attitudes that are missing from crucial projects and everyday use. Effective safety and security assurance of software takes effort and understanding that a huge fraction of programmer non-software-engineers don't want to undertake or know anything about. This needs to change; it is defensible, marketable expertise that deserves to be appreciated and cannot be replaced by AI anytime soon. There's no "easy" button, but there are a lot of "lazy" buttons that don't function as intended.
Good. Rust is fine, but it makes you pay a complexity tax for manual memory management that you just don't need most of the time. In almost all real world cases, a GC is fine. TypeScript is a memory-safe language, just like Rust, and I can't imagine a database ORM of all things needing manual memory management to get good performance. (Talking to the database, not memory management, is the bottleneck!)
I don’t think the problems they were dealing with had much to do with any of those properties of Rust. Their issue seems to have been that they weren’t using native JavaScript/TypeScript and that their situation was improved by using native TypeScript.
If they had been using something like Java or Go or Haskell, etc., they may well have had even more downsides.
The trait/type system can get pretty complex. Advanced Rust doesn't inherit like typical OOP; you build on generics with trait constraints, and that is a much more complex and unusual thing to model in your head. Granted, you get used to it.
OOP inheritance is an anti-pattern and a hype train of the '90s/'00s, especially multiple inheritance. Especially the codebases where they create extremely verbose factories and abstract classes for every damn thing... Java, C++, and Hack (PHP-kind) shops are frequently guilty of this.
Duck typing and selective traits/protocols are the way to go™. Go, Rust, Erlang+Elixir... they're sane.
What I don't like about Rust is the inability to override blanket trait implementations, and the inability to provide multiple, very narrow, semi-blanket implementations.
Finally: People who can't/don't want to learn multiple programming language platform paradigms probably should turn in their professional software engineer cards. ;)
Sure, if you define "automatic memory management" in a bespoke way.
> Could you be more specific?
The lifetime system, one of the most complex and challenging parts of the Rust programming model, exists only so that Rust can combine manual memory management with memory safety. Relax the requirement to manage memory manually and you can safely delete lifetimes and end up with a much simpler language.