No offense, but I think it's disingenuous to compare the SPA movement with a guy with vague graphs that seem to suggest going back to what people used to do 5-10 years ago.
The SPA movement "threw out" the notion of ajaxing HTML snippets in favor of ajaxing structured data for many reasons: better separation of concerns, better asset cacheability, better defaults against XSS, better infrastructure for multi-client architectures, the list goes on. I'd argue that security w/ data endpoints is far easier to audit and reason about than the old school RPC-style send-me-html-when-I-do-X server interfaces.
But when do you update the view? Angular dirty-checks the model on each $digest cycle. React dirty-checks the view.
Why not simply require the component to call a function to indicate it's changed a couple variables in its state? Simply keep references to the DOM elements in your component and update them. It's much faster and gives you more control, and a programmer who forgets to write the function call will realize it as soon as they don't see the update. The IDE or linter can even have a static analyzer that flags a missing call to the function.
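To make the idea concrete, here's a minimal sketch of that pattern (all names are illustrative, and `el` stands in for any DOM-node-like object with a `textContent` property):

```javascript
// Sketch: a component that saves a direct reference to its DOM node and
// exposes an explicit changed() call instead of relying on dirty checking.
function makeCounter(el) {
  return {
    count: 0,
    el,                 // reference saved once, never re-queried
    increment() {
      this.count++;
      this.changed();   // the programmer must remember this call
    },
    changed() {
      // the component knows exactly which nodes depend on which state
      this.el.textContent = String(this.count);
    }
  };
}
```

If the `changed()` call is forgotten, the stale view makes the bug visible immediately, which is the point being argued above.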
I think that a lot of times these attempts at convenience (such as Angular's two way binding, or the virtual DOM) just throw more layers of complexity and slow things down from a relatively simple straightforward approach, while providing little other than saving keystrokes.
> Why not simply require the component to call a function to indicate it's changed a couple variables in its state
This is actually roughly what knockout and ember (pre-glimmer) do. They are known as KVO (key-value observer) systems and have implementation challenges of their own: knowing when to batch operations, dealing w/ computed properties and rx glitches (in reactive systems, a "glitch" is the name given to temporary inconsistencies that occur between stable states), and added complexity in terms of requiring the model layer to be observable-based (as opposed to POJOs in Angular/React/friends).
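For readers unfamiliar with the term, a minimal key-value observable (the primitive that Knockout-style systems build on) looks roughly like this. Real implementations additionally batch notifications and order computed-property updates to avoid the glitches mentioned above; this toy version notifies synchronously and is therefore glitch-prone by design:

```javascript
// Toy sketch of a KVO primitive: a wrapped value that notifies subscribers.
function observable(value) {
  const subscribers = [];
  return {
    get() { return value; },
    set(next) {
      value = next;
      // notifying immediately, with no batching or topological ordering,
      // is exactly what produces "glitches" in computed-property chains
      subscribers.forEach(fn => fn(next));
    },
    subscribe(fn) { subscribers.push(fn); }
  };
}
```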
Also, high quality KVO systems are far more difficult to implement. To my knowledge, Vue is currently the fastest KVO-based javascript library in existence, and in order to support its POJO-like model API, it's significantly larger than Snabbdom (which is one of the fastest vdom implementations currently, despite clocking in at a mere 200-300 LOC).
AFAIK, most of the challenges faced by KVO systems have not been explored as extensively (at least by the javascript community) as virtual dom algorithm optimizations have, so I currently believe high quality virtual dom libraries are likely to perform better in various real-life scenarios than current state-of-the-art KVO systems.
Right. But why not just make the app developer explicitly specify that some variables in the state have changed, and let the developer and requestAnimationFrame do the batching? After all, they are the best positioned to know when a batched update has been completed.
Letting the developer tell the engine when to batch is actually not the hard part. React works exactly like that, for example. Mithril defaults to most-common-scenario call profile, but mostly as a matter of convenience, not because there's anything inherently difficult about exposing that flexibility to the developer.
The pain point that templating engines address is automating the process of figuring out what DOM changes are caused by what state changes. In order to do that, you have to either dirty check the state tree (as Angular 1 does), dirty check the template tree (as React/Mithril/vdom does), or have an observable state tree (as Knockout does). If the templating engine defers its responsibility to the developer, then if, for example, you do a `dataList.pop()`, you are responsible for writing out the code to remove the last item in a DOMNodeList, or updating some count label, or whatever else the view may be doing. This works ok in a small app, but it tends to become hard to maintain as a codebase grows in size (due to requirement changes or whatever).
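Here's a sketch of the hand-written view code you'd owe in the `dataList.pop()` case without a templating engine (the names are illustrative, and `listEl`/`countEl` stand in for DOM nodes):

```javascript
// Sketch: every mutation of dataList needs a matching hand-written DOM
// routine to keep the view in sync.
function makeListView(dataList, listEl, countEl) {
  return {
    pop() {
      dataList.pop();
      listEl.removeChild(listEl.lastChild);        // mirror the model change
      countEl.textContent = dataList.length + " items"; // and every derived bit
    }
    // ...and similar routines for push, splice, sort, filter, etc.
  };
}
```

A templating engine collapses all of those routines into one view description, which is the maintainability argument being made above.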
Remember, tools are reusable and the tool's developer is the one who has to write that code. The app developer just plops the tool on a page and it renders itself.
Most tools don't need 60fps efficiency for animations. They would implement a simple .refresh() method (similar to React's render method, except without the virtual DOM). When the tool is first activated, it typically renders any HTML it needs inside this method, unless the HTML was already there (because e.g. it was rendered server-side). It typically renders a template with some fields from the state, just like in Ember.
Right after this, the tool usually just stores references to elements it wants to update. For example,
tool.$foo = tool.$(".someNode");
And then when it comes time to update, you just do:
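The snippet seems to have gotten lost here. Judging from the sentence that follows, it presumably registered a state-change handler; here is an illustrative reconstruction (the `onStateChanged` signature, the field names `x` and `y`, and the tiny synchronous implementation are my assumptions, not the original code — the framework described below would coalesce handler calls to at most once per frame rather than fire them synchronously):

```javascript
// Illustrative reconstruction: update the saved $foo reference whenever
// x or y is reported as changed.
const tool = {
  state: { x: 0, y: 0 },
  $foo: { text(s) { this.last = s; } },   // stand-in for a jQuery-like node
  handlers: [],
  onStateChanged(keys, fn) { this.handlers.push({ keys, fn }); },
  setState(patch) {
    Object.assign(this.state, patch);
    const changed = Object.keys(patch);
    this.handlers
      .filter(h => h.keys.some(k => changed.includes(k)))
      .forEach(h => h.fn.call(tool, tool.state));
  }
};

tool.onStateChanged(["x", "y"], function (state) {
  this.$foo.text(state.x + ", " + state.y);
});
```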
This event occurs whenever either x or y was reported as changed. Our framework would make sure these onStateChanged events are triggered at most once per animation frame.
In your example, I'd signal that some array changed, and the tool would figure out what changed via dirty checking. But why do that crap? Why dirty-check at all? In our framework, we have streams and messages posted to the streams which are supposed to say what changed. A move was made in a chess game. A person wrote a chat message. These things are updates, which are hard to represent in Angular and React as mere maps from plain data to the DOM.
What's wrong with that? Any tool developer can add event listeners for when something changes, and do whatever update they have to do. The app developer just updates a tool's state and it just works. If you need 60fps or just want to render 1,000 constantly updating tools on a page (BAD IDEA), then you can do it.
I guess I should have said that the rest of our framework uses the same concepts. Instead of syncing data like Firebase or Parse, we treat data as streams onto which one or more collaborating users post messages. We take care of making sure the order of the messages is the same everywhere, and we take care of a ton more things such as pushing realtime updates, managing subscriptions, access control etc. All you have to do is implement the visual change when the server says a chess move has been made, etc.
We even have a convention for "optimistic" changes that assume that your POST succeeded by simulating messages that should have come in. Once in a while, the assumption is violated, eg if another user posts another message in the meantime, or the server becomes unreachable. Then we .refresh() the stream to its last confirmed state and all tools automatically refresh also because they have event listeners on that Q.Stream object's onRefresh event. Then your tool may want to retry the pending actions with OT, ask the user, or whatever. And so forth.
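A generic sketch of that optimistic pattern, for readers who haven't seen it — this is not the actual Q.Stream API (only .refresh() and onRefresh are mentioned above; the field names and the `send` callback are assumptions):

```javascript
// Sketch: apply the message locally as if the POST succeeded, and roll
// back to the last confirmed state if it didn't.
function postOptimistic(stream, message, send) {
  stream.messages.push(message);     // simulate the message we expect back
  return send(message).then(
    () => { stream.confirmed = stream.messages.length; },   // assumption held
    () => {                                   // conflict / server unreachable:
      stream.messages.length = stream.confirmed;    // drop pending messages
      stream.onRefresh.forEach(fn => fn(stream));   // tools re-render themselves
    }
  );
}
```

The tool's retry/OT/ask-the-user logic described above would hang off the refresh handlers.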
Have you seen any framework with this straightforward model?
> In your example, I'd signal that some array changed, and the tool would figure out what changed via dirty checking. But why do that crap?
The example I usually use is a data table (sortable columns, filtering, batch delete, pagination, etc. you know, the usual suspects). The main benefit of a templating engine is that you don't need to write various routines to do each variation of DOM manipulation and worry about the various permutations, you just write the template once.
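A sketch of why the template covers all the permutations at once: sorting, filtering and pagination become plain data transforms, and a single view function describes the resulting DOM. The `m()` here is a toy hyperscript stand-in (loosely in the style of a vdom library's API), not any specific library:

```javascript
// Toy hyperscript: a vdom node is just a plain object.
const m = (tag, children) => ({ tag, children });

// One view function handles every combination of sort/filter/page.
const tableView = state =>
  m("table", state.rows
    .filter(state.filter)
    .sort(state.sort)
    .slice(state.page * state.pageSize, (state.page + 1) * state.pageSize)
    .map(row => m("tr", row.map(cell => m("td", cell)))));
```

With hand-rolled DOM code, each of those operations would instead need its own set of DOM-manipulation routines.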
Personally, I prefer to not rely on querySelector if possible in order to avoid code smells related to identifying elements in a page with reused components, and I prefer to avoid observables because I think that their "come-from" quality makes them harder to debug.
In addition, I feel the declarative nature of virtual dom enables a level of expressiveness and refactorability (particularly wrt composable components) that is difficult to convey to someone who's more used to procedural view manipulation.
> Have you seen any framework with this straightforward model?
Yes, I believe Flight is similarly event-based, and Backbone can be used like that, too, pretty much out of the box.
Being a framework author, I take great interest in improvements in framework design, but to be honest, I haven't gotten much out of event-based systems and I generally feel like they are a step backwards from virtual dom in a number of areas. Mind you, I'm not saying that event-based frameworks are bad. Plenty of Backbone codebases work just fine, and if your framework works for you, then that's great.
What is really the difference between this vaunted "declarative" syntax:
<div>{{moo*2}}</div>
Which is then picked up by the framework to do two way databinding, dirty checking and other inefficient stuff it assumes you want, vs the equally declarative:
<div class="foo"></div>
And then have the component's code look for ".foo", save the reference and update it when the state changes? It's easy for the component developer to know what to do, after all, and it might involve more than just text replacement. Angular has filters as a poor man's declarative version of functions.
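Spelled out, the hand-rolled equivalent of the `{{moo*2}}` binding might look like this (`root` is assumed to be any DOM-node-like object with a `querySelector` method):

```javascript
// Sketch: save the .foo reference once, then update it from an explicit
// function whenever the state changes.
function makeFooView(root) {
  const fooEl = root.querySelector(".foo");   // saved once, not re-queried
  return function update(state) {
    fooEl.textContent = String(state.moo * 2); // can be arbitrary logic,
  };                                           // not just text replacement
}
```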
The trick to fast rendering is: don't read from the DOM, throttle DOM updates to requestAnimationFrame, and turn off fancy css when animating.
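The "throttle DOM updates to requestAnimationFrame" part can be sketched as follows — any number of schedule calls within a frame coalesce into a single render (the `raf` parameter is injectable only so the sketch also runs outside a browser):

```javascript
// Sketch: coalesce repeated scheduleRender() calls into one render per frame.
function makeScheduler(render, raf = requestAnimationFrame) {
  let scheduled = false;
  return function scheduleRender() {
    if (scheduled) return;   // already queued for this frame
    scheduled = true;
    raf(() => {
      scheduled = false;
      render();              // one DOM write pass per animation frame
    });
  };
}
```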
It's true that without two-way databinding, you have to read from the DOM. Maybe in that sense two-way databinding is good, but when should the update happen? Onchange?
Well, the difference has already been explained to some extent in other comments. In the first snippet using a templating engine, the engine automates the DOM manipulation. There is no procedural "and-then-have-the-component's-code-do-X" step.
In the second snippet you're responsible for writing that code and making sure that your `.foo` query didn't unintentionally pick up some other random DOM element, that you don't have logical conflicts in the case of some code relying on the class name's presence and other code toggling it, etc.
Re: performance, I think today it would be wise to start questioning the idea that hand-optimized DOM manipulation is faster than libraries, because most people aren't templating engine authors and don't know the tricks that those engines use, the algorithms to deal w/ the hairier cases, or what triggers deoptimizations in js JIT engines, whereas library authors are actively dealing with those concerns on an ongoing basis.
Two-way data binding is somewhat orthogonal to the templating engines' primary goals. All a 2-way data binding does is wrap a helper around an event handler (e.g. oninput) and a setAttribute call. But as I mentioned above, you don't need to use the bidirectional binding abstraction; you can have explicit re-render calls instead.
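That "helper around an event handler" claim can be sketched in a few lines (names are illustrative; `input` is assumed to be any object with `value` and `addEventListener`, so a real DOM node or a test double works):

```javascript
// Sketch: two-way binding is just an event listener paired with a write.
function bindValue(input, state, key, onChange) {
  input.value = state[key];                 // model -> view, once
  input.addEventListener("input", () => {
    state[key] = input.value;               // view -> model, on each keystroke
    if (onChange) onChange();               // e.g. trigger an explicit re-render
  });
}
```

Dropping the `onChange` hook and calling a re-render explicitly is exactly the "explicit re-render calls" alternative mentioned above.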
So they run into the typical problem of handling the "general case" in their own special way, and leaving you out in the cold for the other cases. Someone made a chess move? Someone queued a new song? Someone wrote a new chat message?