Another take: rewrites and rehashes tend to be bad because they are not exciting for programmers. Everything you're about to write is predictable, nothing looks clearly better, and it just feels forced. First versions of anything are exciting, the possibilities are endless, and even if the choices along the path are suboptimal, people are willing to make it work right.
He hints at Electron in the end, but I think the real blame lies with React, which has become the standard in the past five years.
Nobody has any fucking idea what’s going on in their React projects. I work with incredibly bright people and not a single one can explain accurately what happens when you press a button. On the way to solving UI consistency, it actually made it impossible for anyone to reason about what’s happening on the screen, and bugs like the ones shown simply pop up in random places, due to the complete lack of visibility into the system. No, the debug tooling is not enough. I’m really looking forward to whatever next thing becomes popular and replaces this shit show.
>I’m really looking forward to whatever next thing becomes popular and replaces this shit show.
I'm with you, but motivation to really learn a system tanks when there's something else on the horizon. And what happens when the new thing seems really great for its first year or two, then goes downhill, and we're back to asking for its replacement only five years after its release? That tells me we're still chasing 'new', but instead of a positive 'new', it's a negative one.
This was also reinforced constantly by people claiming you'll be unemployable if you aren't riding the 'new' wave or doing X amount of things in your spare time.
It's a natural consequence of an industry that moves quickly. If we want a more stable bedrock, we MUST slow down.
I completely agree here. React has replaced the DOM, and it's pretty fast and pretty efficient when you understand its limitations... but when you start rendering to the canvas or creating SVG animations from within React code, everything is utterly destroyed. Performance is 1/1000 of what the platform provides. I have completely stopped using frameworks in my day-to-day work and moved my company to a simple pattern for updatable, optionally stateful DOM elements. Definitely some headaches, some verbosity, and so forth. But zero toolchain and much better performance, and the performance will improve, month by month, forever.
No one's going back to the messy spaghetti-generating "just-jQuery" pattern of yore.
I devised a way of using plain closures to create DOM nodes and express how the nodes and their descendants should change when the model changes. So a component is a function that creates a node and returns a function not unlike a ReactComponent#render() method:
    props => newNodeState
When called, that function returns an object describing the new state of the node. Roughly:
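Something like this (a sketch; property names other than node are illustrative):

    {
      node,                      // reference to the actual DOM node
      className: 'foo',
      childNodes: [ /* state objects for the children, same shape */ ]
    }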
So, it's organized exactly like a React app. A simple reconciliation function (~100 lines) applies a state description to the topmost node and then iterates recursively through childNodes. A DOM node is touched if and only if its previous state object differs from its current state object - no fancy tree diffing. And we don't have to fake events or any of that; everything is just the native platform.
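A minimal sketch of that reconciliation step, assuming the state shape above (the real version also creates and removes children, which is elided here):

    // Apply a state description to a node, then recurse into childNodes.
    // A property is written only if the described value differs from the
    // node's current value - strict primitive comparison, no tree diffing.
    function reconcile(node, state) {
      for (const key of Object.keys(state)) {
        if (key === 'node' || key === 'childNodes') continue;
        if (node[key] !== state[key]) node[key] = state[key];
      }
      (state.childNodes || []).forEach((childState, i) =>
        reconcile(node.childNodes[i], childState));
    }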
I implemented an in-browser code editor this way. Syntax highlighting, bracket matching, soft wrap, sophisticated selection, copy/cut/paste, basic linting, code hints, all of it... It edits huge files with no hint of delay as you type, select, &c. It was a thorough test of the approach.
Also, when we animate something, we can hook right into the way reconciliation works and pass control of the update to the component itself, to execute its own transition states and then pass control back to the reconcile function... This has made some really beautiful things possible. Fine-grained control over everything - timing, order, &c. - but only when you want it.
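One way that handoff could be wired up (purely a guess at the mechanism; the transition property is invented for illustration):

    // If a state description carries a transition function, the component
    // runs its own animation and resumes normal reconciliation when done.
    function reconcileWithTransitions(node, state) {
      if (typeof state.transition === 'function') {
        state.transition(node, () => reconcile(node, state));
        return;
      }
      reconcile(node, state);
    }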
I am sorry, but I don't fully understand. To me it sounds like you are describing exactly tree diffing when you say that the next node is only touched if its state object changed.
I have been through this struggle too, of wanting to get rid of bloated tools and tools I don't understand, and the best I've found for this is Hyperapp. I've read the source code a few times (I was thinking about patching it to work well with web components), so I feel it falls into the category of tools I can use. But I'm genuinely interested in understanding what you've done, if it offers an alternative (even if a more clunky one).
>>> it sounds like you are describing exactly tree diffing
The object returned by the function expresses a tiny subset of the properties of a DOM node. Often just {className, childNodes: [...]}. Only those explicit attributes are checked for update or otherwise dealt with by my code. My code has no idea that a DOM node has a thousand properties. By contrast, a ReactComponent is more complex from a JS POV than a native DOM node.
In other words, if my code returns: {className: 'foo'} at time t0, and then returns {} at time t1, the className in the DOM will be 'foo' at t0 and at t1. That is not at all how exhaustive tree diffs work, and not at all how react works.
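Concretely, with a reconcile like the sketch above:

    const t0 = { className: 'foo' };
    const t1 = {};

    reconcile(node, t0);  // node.className is now 'foo'
    reconcile(node, t1);  // className isn't mentioned, so it stays 'foo'

An exhaustive diff would instead union the keys of both versions, notice className disappeared, and clear it.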
With 5,000 nodes, you might have 8K-15K property comparisons. Per-render CPU load thus grows linearly and slowly with each new node. I can re-render a thousand nodes in 2-5 milliseconds with no framework churn or build steps or any of that. But more importantly, we have the ability to step into "straight-to-canvas" mode (or whatever else) without rupturing any abstractions and without awkward hacks.
This is unidirectional data flow plus component-based design/organization while letting the DOM handle the DOM: no fake elements, no fake events -- nothing but utterly fast strict primitive value comparisons on shallow object properties.
EDIT: Earlier I said that a node changes if and only if its state description changed; that is not strictly true. "if and only if" should just be "only if".
This makes a lot of sense. It's essentially giving up some "niceness" that React gives to make it faster and closer to the metal. That sounds like a critique, but that's what this whole thread is about, and one way to approach something I've also given a lot of thought.
To do this, I imagine you will have to do certain things manually. I guess you can't just have functions that return a vdom, because, as you say, the absence of a property doesn't mean the library will delete it for you. So do you keep the previous vdom? Patch it manually and then send it off to update the elements? ... I guess it's a minor detail. Doesn't matter.
Interesting approach, thanks for sharing! I will definitely spend some time looking into it. Encouraging that it seems to be working out for you :)
To answer your technical question: you can approach it in one of two ways (I've done both). The first you hinted at. You can keep the last state object handy for the next incoming update and compare (e.g.) stateA.className against stateB.className, which is extremely fast. But then you have an extra object in memory for every single node, which is a consideration. You can also just use the node itself and compare state.className to node.className. It turns out this is ~90-100% as fast ~95% of the time, and sips memory.
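Side by side, the two strategies look like this (sketch):

    // 1. Keep the previous state object per node.
    if (prev.className !== next.className) node.className = next.className;
    prev = next;  // one extra retained object per node

    // 2. Compare against the live node property directly.
    if (node.className !== next.className) node.className = next.className;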
If you're thinking, "wait, compare it to the DOM node? That will be slow!" - notice that we're not querying the DOM. We're checking a property on an object to which we have a reference in hand. I can run comparisons against node.className (and other properties) millions of times per second. Recall that my component update functions return an object of roughly the form:
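Something like (same illustrative shape as before):

    {
      node,                     // the DOM node reference comes first
      className: 'editor-line',
      childNodes: [ /* ... */ ]
    }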
That first property is the DOM node reference, so there's no difficulty in running the updates this way. Things are slower when dealing with props that need to be handled via getAttribute() and setAttribute(), but those cases are <10%, and can be optimized away by caching the comparison value to avoid getAttribute(). There are complications with numerical values which get rounded by the DOM and fool your code into doing DOM updates that don't need to happen, but it's all handle-able.
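The caching trick might look like this (attrCache and setAttr are hypothetical names):

    const attrCache = new WeakMap();  // node -> { attributeName: lastValue }

    function setAttr(node, name, value) {
      const cache = attrCache.get(node) || {};
      if (cache[name] !== value) {    // no getAttribute() call needed
        node.setAttribute(name, value);
        cache[name] = value;
        attrCache.set(node, cache);
      }
    }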
Maybe React has the advantage as the project grows? From what I understand it batches updates from all the different components on the page, avoiding unnecessary reflows that might easily creep in when you do things the old-fashioned way.
I think my favourite fact(oid) to point out here would be that the React model is essentially the same thing as the good ol' Windows GUI model. The good ol' 1980s Windows, though perhaps slightly more convenient for developers. See [0].
I think it's good to keep that in mind as a reference point.
It's just the underlying model that is similar, but React is pretty good at abstracting all that (unlike Win32).
When it comes to developer experience, I'd say that React and company are ahead of most desktop UI technologies, and have inspired others (Flutter, SwiftUI).
Apparently there are React Studio, BuilderX, and tools like Sketch2React and Figma to React. Ionic Studio will probably support React in the near future (maybe it already does).
What is better? jQuery? It comes with its own can of worms, and React's designers had solid reasons to migrate away from immediate DOM modification. In general, UI is hard. Nice features like compositing, variable-width fonts, reflow, etc. come with underlying mechanisms that are pretty complicated, and once something behaves differently from expectations it can be hard to understand why.
jQuery: 88KB, standard everywhere, one entity responsible for all of it, people know what it is and what it does, if it breaks you know what went wrong and who to blame.
Literally anything built with NPM: megabytes? tens of megabytes? in size, totally inscrutable, code being pulled in from hundreds of megabytes of code in tens of thousands of packages from hundreds or thousands of people of unknown (and unknowable) competence and trustworthiness. If it breaks, not only do you not know who to blame, but you probably have literally no idea what went wrong.
It depends, as always. The problem React was originally solving was that DOM updates cause re-rendering, which can be slow; jQuery (usually) works directly on the DOM, so applications heavy in updates don't perform well.
So initially, an equivalent React app would look a lot faster than the jQuery one, due to smart, batched DOM updates. However, because React is so fast, it made people create apps differently.
As always in software development, an application will grow to fill up available performance / memory. If people were to develop on intentionally constricted computers they would do things differently.
(IIRC, at Facebook they'll throttle the internet on some days to 3G speeds to force this exact thing. Tangentially related: at Netflix (IIRC) they have Chaos Monkey, which randomly shuts down servers and causes problems, so errors are a day-to-day thing instead of an exception they haven't foreseen.)
React is just so, so much nicer to work with. It's easy to be dismissive if you've never had to develop UIs with jQuery and didn't experience yourself the transition to React which is a million times better in terms of developer experience.
I feel like people who don't build UIs themselves think of them too much in a purely functional way, as in "it's just buttons and form inputs that do X", and forget about the massive complexity: edge cases, aesthetic requirements, accessibility, rendering on different viewports, huge statefulness, and so on.
Old is better is just not true here. React is a dream. Synthetic eventing, batched updates, and DOM node re-use are so good. I rolled my own DOM renderer recently and remembered a lot of problems from the past that I would not like to re-visit.
Write your own framework-like code with just jQuery and watch it turn into a pile of mush. React is many things, but it is absolutely better than jQuery or Backbone. People always mis-use new technology; that isn't React's fault.
To an extent, UI was solved in 1991 by Visual Basic. Yes, complex state management is not the best in a purely event-based programming model. Yes, you didn’t get a powerful document layout engine seamlessly integrated to the UI. Yes, theming your components was more difficult. And so on. But… if the alternative is what we have now? I’m not sure.
One big disadvantage with Visual Basic (and similar visual form designers) is that you can't put the result in version control and diff or merge it in any meaningful way.
UI is hard because you're using a hypertext language with fewer features than were standard in the '60s. Then with styling on top of that, then with a scripting language on top of that.
I was reading Computer Lib/Dream Machines over the holidays, and I wonder where it all went so wrong.
Free markets hate good software. "Good" meaning secure, stable, and boring.
On both ends.
Software developers hate boring software for pragmatic HR-driven career reasons and because devs are apes and apes are faddish and like the shiny new thing.
And commercial hegemony tends to go to the companies that slap something together with duct tape and bubble gum and rush it out the door.
So you get clusterfucks like Unix winning out against elegantly designed Lisp systems, and clusterfucks like Linux winning out against elegantly designed Unix systems, and clusterfucks like Docker and microservices and whatever other "innovations" "winning out" over elegantly designed Linux package management and normal webservers and whatnot.
At some point someone important will figure out that no software should ever need to be updated for any reason ever, and a software update should carry the same stigma as...I don't know...adultery once carried. Or an oil spill. Or cooking the books. Whatever.
But then also it's important to be realistic. If anyone ever goes back and fixes any of this, well, a whole lot of very smart people are going to go unemployed.
Free markets hate unchanging software. Software churn generates activity and revenue, and the basic goal of the game is to be the one controlling the change. Change is good when you have your hands on the knobs and levers, bad when someone else does. Organizations try to steer their users away from having dependencies on changes that they don't control. "You're still using some of XYZ Corp's tools along with ABC's suite? In the upcoming release, we will help you drop that XYZ stuff ..."
That brings to mind one common computer-scientist fallacy - that elegance is an end in itself. It may overlap with properties that make software practical, but in practice it is not one of them.
Recursive solutions are more elegant but you still use a stack and while loop to not smash the stack.
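E.g., a preorder DOM walk, recursive vs. explicit stack (a sketch; function names are illustrative):

    // Recursive: elegant, but a deep tree can smash the call stack.
    function walk(node, visit) {
      visit(node);
      for (const child of node.childNodes) walk(child, visit);
    }

    // Same preorder traversal with an explicit stack and a while loop.
    function walkIter(root, visit) {
      const stack = [root];
      while (stack.length) {
        const node = stack.pop();
        visit(node);
        // push children in reverse so they pop in document order
        for (let i = node.childNodes.length - 1; i >= 0; i--) {
          stack.push(node.childNodes[i]);
        }
      }
    }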
Scheme is properly tail-recursive and has been around since 1975. Most (all?) Common Lisp implementations have proper tail calls. Clojure has tail-call optimization for simple cases, and only if you explicitly ask for it (via recur), but that gets you most of the way there most of the time.
So there are reasons to prefer more imperative languages and their systems, but stack-smashing isn't one of them.
This, a thousand times. It's amazing how each new layer of abstraction becomes the smallest unit of understanding you can work with. Browser APIs were the foundation for a while, then DOM-manipulation libs like jQuery, and now full-blown view libraries and frameworks like React and Angular.
If someone's starting a new website project (one that has the potential to become quite complex), what would you recommend as the best frontend technology to adopt, then?
Flutter is a very good bet IMO. It uses Dart, which was designed from the ground up to be a solid front-end language instead of building on top of JS. The underlying architecture of Flutter is clearly articulated, and its error messages are informative. It still seems a bit slow and bloated in some respects, but it's getting better every day, and I think their top-down control of the stack is going to let them trim it all the way down.
I take it you’re thinking of virtual DOM only, which is not the problem, or the component class which hides all of the details.
React is huge; it’s unlikely you’ll implement the synthetic events, lifecycle hooks, bottom-up updates, context, hooks with their magical stacks, rendering “optimizations”, and all of the React-specific warts.
There are simple reimplementations like Hyperapp and Preact, and I completely recommend using those instead. I really meant that React the library and ecosystem are at fault, not the general model.
This doesn't seem unique to React projects. Can anyone explain what is happening under the hood in their Angular projects? How about Vue? It seems to be a failing of all major UI frameworks, lots of complexity is abstracted away.
It might be the next big thing, but Svelte doesn't solve the problem outlined in the root of this subthread: nobody has any idea what the fuck is going on.
I like Svelte, the simplicity of programming in it is great, and it has several advantages compared to React. But I have no idea how it works, past a point of complexity. Like, yes: I can run the compiler and check out the JS it generates, same as I can do in React. For simple components, sometimes the compiled code even makes sense. But when I introduce repeated state mutations or components that reference each other, I no longer know what's going on at all, and I don't think I'm alone in this.
Svelte might be an improvement in ergonomics (and that's a good and much needed thing!) but it does nothing to answer the obscurity/too-far-up-the-abstraction-stack-itis that GP mentioned. The whole point of that is frameworks/abstraction layers that tell you "you don't need to understand what's going on below here" are... maybe not lying, exactly, but also not telling the whole truth about the costs of operating at that level of both tooling abstraction and developer comprehension.
Time is money and engineers aren't given time to properly finish developing software before releases.
Add to this the modern way of being able to hotfix or update features and you will set an even lower bar for working software.
The reason an iPod didn't release with a broken music player is that back then forcing users to just update their app/OS was too big an ask. You shipped complete products.
Now a company like Apple even prides itself on releasing phone hardware with missing software features: Deep Fusion shipped months after the newest iPhone did.
Software delivery became faster and it is being abused. It is not only being used to ship fixes and complete new features, but it is being used to ship incomplete software that will be fixed later.
As a final sidenote while I'm whining about Apple: as a consultant in the devops field with an emphasis on CI/CD, the relative difficulty of using macOS in a CI/CD pipeline makes me believe that Apple has a terrible time testing its software. This is pure speculation based on my experience. A pure Apple shop has probably solved many of the problems and hiccups we might run into, but that's why I used the term "relative difficulty".
Yet somehow, it seems to me that most software - including all the "innovative" hot companies - are mostly rewriting what came before, just in a different tech stack. So how come nobody wants to rewrite the prior art to be faster than it was before?
Rewrites can be really amazing if you incentivize them that way. It's really important to have a solid reason for doing a rewrite, though. But if there are good reasons, the problem of zero- (or < x) downtime migrations is an opportunity to do some solid engineering work.
Anecdotally, a lot of rewrites happen for the wrong reasons, usually NIH or churn. The key to a good rewrite is understanding the current system really well; without that, it's very hard to work with it, let alone replace it.
https://tonsky.me/blog/good-times-weak-men/