Hacker News | sahil-kang's comments

I haven't seen it implemented anywhere, but that sounds like the "pagetable displacement" approach described here: https://wiki.osdev.org/IPC_Data_Copying_methods#Pagetable_di...

The same idea occurred to me a while ago too, which is how I originally found that link :)


How performant is that in practice? I thought remapping pages was a fairly expensive operation. Using a statically mapped circular buffer makes more sense to me, at least.

Disclaimer: I don't actually know what I'm talking about, lol


To be clear, since the other replies to you don't seem to be mentioning it, the major costs of MMU page-based virtual memory are never about setting the page metadata. In any instance of remapping, TLB shootdowns and subsequent misses hurt. Page remapping is still very useful for large buffers, and other costs can be controlled based on intended usage, but smaller buffers should use other methods.

(Of course I'm being vague about the cutoff for "large" and "smaller" buffers. Always benchmark!)


You can pretty reliably do it on the order of 1 µs on a modern desktop processor. If you use a level-2-sized mapping table entry of, say, 2 MB, that is a transfer speed on the order of 2 TB/s, or ~32x faster than RAM for a single core, even if you only move a single level-2-sized entry. If you transfer multiple in one go, or use, say, a level-3-sized mapping table entry of 1 GB, that would be 1 PB/s, or ~16,000x faster than RAM, or roughly 200x the full memory bandwidth of an entire H200 GPU.
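Spelled out, the arithmetic behind those figures (the ~1 µs remap time is taken from the comment; the ~64 GB/s single-core copy bandwidth is an assumed ballpark, not from the source):

```python
# Back-of-the-envelope check of the remap-vs-copy claim.
# Assumptions: 1 us per remap; ~64 GB/s effective single-core
# memcpy bandwidth (a ballpark figure, not from the comment).
remap_time_s = 1e-6

l2_entry = 2 * 1024**2          # 2 MB huge-page-sized entry
l3_entry = 1024**3              # 1 GB huge-page-sized entry
ram_copy_bps = 64e9             # ~64 GB/s

l2_rate = l2_entry / remap_time_s   # ~2.1e12 B/s, i.e. ~2 TB/s
l3_rate = l3_entry / remap_time_s   # ~1.1e15 B/s, i.e. ~1 PB/s

print(l2_rate / ram_copy_bps)   # ~32.8x faster than copying
print(l3_rate / ram_copy_bps)   # ~16,777x
```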


Pretty quick, far faster than an inter-process memory copy. The only way to be sure would be to set it up and measure it, but on a 486/33 I could do this ~200K times per second; on modern systems it should be a lot faster than that, more so if the process(es) do not use FP. But I never actually tried setting up, say, a /dev/null implementation that used this. It would be an interesting experiment.


fwiw, I normally think of LMDB when the idea of using the fs instead of a db crosses my mind


If I understand correctly, the main selling point of htmx is that _html_ is extended with the attributes from GP: the idea being that a bulk of the interactivity of SPAs can be achieved via hypertext alone

I think a more precise reading of GP's "changes just 4 things in browser behavior" is "changes just 4 things in html behavior"


> the bulk of the interactivity of SPAs can be achieved via hypertext alone

except that with htmx the backend is supposed to return htmx markup, so now the hypertext is smeared between two systems. this lack of separation is the main thing holding me back from using it in any serious effort.

it feels like the "full stack dev" problem writ large. should my backend devs also be experts in front end? should my front end devs be digging into backend code? I'm a generalist myself but it's not reasonable to expect everyone to be a generalist, nor does it make good business sense.

then there's the backend-for-frontend argument, which the manager in me always reads as "we need to double the number of backend services". it's a bit hyperbolic but still.


You should be using the same templates/partials/fragments/components that render in your initial page load as your responses to page updates.

So to render a table, you render each row from a row component; then, in response to updating a row in the table, your backend endpoint returns the row component and htmx swaps it in and rebinds everything.
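A minimal sketch of that pattern, assuming a Python backend; the function names and the specific hx- attributes here are illustrative, not from any particular codebase:

```python
# Hypothetical sketch: one row-rendering function serves both the
# initial full-page render and the htmx partial update.
from html import escape

def render_row(item):
    # The single source of markup for a table row.
    return (f'<tr id="row-{item["id"]}" hx-get="/rows/{item["id"]}" '
            f'hx-trigger="refresh" hx-swap="outerHTML">'
            f'<td>{escape(item["name"])}</td></tr>')

def render_table(items):
    # Full page load: the same component, looped.
    rows = "".join(render_row(i) for i in items)
    return f"<table><tbody>{rows}</tbody></table>"

def update_row_endpoint(item):
    # htmx response: just the fragment; htmx swaps it in via outerHTML.
    return render_row(item)
```

In practice the "component" would usually be a shared template partial (Jinja, ERB, etc.) rather than an f-string, but the point is the same: both code paths emit identical markup.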

Also, one big aim of htmx and this approach is to remerge the backend and frontend, like the old days.

This is the aim of HATEOAS (hypermedia as the engine of application state), and if you came up in web dev in the past 12 years or so then it’s going to feel very alien.

And honestly? Yes, I think everyone should be a generalist, otherwise you have just siloed your stack in a way that increases both tech and business risk. Sure, have someone who is an expert where needed, but they should also be able to touch the full stack.

Be T-shaped: broad overall, and deep in one thing.


If I were to tackle a simplified view of the problem I think you're describing: your frontend devs should provide the markup template that the backend would interpolate and return

In the scenario you've alluded to, your backend devs are currently producing json data and your frontend devs are interpolating that into markup in the browser. In the simplest case then, your frontend devs would just provide a markup template that can be interpolated with the json already being produced. In slightly less simple cases, they can provide a function to be called instead
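A toy sketch of the simplest case (the `ROW_TEMPLATE` and `render_user` names are made up): the frontend team owns the template string, and the backend interpolates the same dict it used to serialize as JSON:

```python
# Toy sketch: frontend devs own the markup template; backend devs
# keep producing the same JSON-shaped data and just interpolate it.
from string import Template
from html import escape

# Provided by the frontend team; the backend treats it as opaque markup.
ROW_TEMPLATE = Template('<li class="user">$name ($email)</li>')

def render_user(payload: dict) -> str:
    # Same dict that used to be serialized as JSON for the browser.
    safe = {k: escape(str(v)) for k, v in payload.items()}
    return ROW_TEMPLATE.substitute(safe)

print(render_user({"name": "Ada", "email": "ada@example.com"}))
# <li class="user">Ada (ada@example.com)</li>
```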

The gist is that the logic of taking data and producing markup should remain in the frontend dev's wheelhouse


with JSON, frontend devs can ignore chunks of it, or work with the backend devs to modify the payload.

HTML, being a representation of a desired state rather than a neutral information-exchange medium, is tricky to do that with. The frontend and backend devs would have to remain in lockstep with any changes made to the payload, i.e. the frontend and backend applications become tightly coupled.

I don't really see how having front end devs hand off a spec saying "we need this exact result format" is better than a loosely coupled result in a standard format


In a team where the backend devs can't work with the HTML templates for whatever reason, the frontend devs would be directly managing those.

I definitely wouldn't ask my frontend dev to write a spec and hand it to me to make their template; the spec would effectively have to be the template source code anyway. Just get in there and work with the HTML templates themselves.


The backend is expected to return html. HTMX is not a markup language


Which begs the question: Why mix logic with templates?


I think this htmx essay [1] addresses the tradeoffs you may be getting at if you're thinking "why html vs js+json": the gist is that html is self-describing to the browser, so the backend can make changes to the html it produces and the browser will understand it without the frontend code needing to be updated

If you're instead thinking more broadly in terms of "structure vs layout", I think the same reasoning for using something like tailwind css or higher-level web components may apply: i.e. the material you interact with is closer to your problem domain

[1] https://htmx.org/essays/two-approaches-to-decoupling/


htmx mixes logic with "templates" in the same manner that HTML mixes logic with templates: hypermedia control information is embedded in the hypermedia in the form of attributes


Maybe I should clarify the type of logic I mean. Standard HTML contains instructions for how content should be rendered. But it has no control flow structures to control what content should be rendered.

Embedding loops and if/else logic in htmx tags creates scenarios where the content is potentially modified in the rendering step, meaning you can't rely on just looking at the data the backend sent over the wire to determine the source of a result that renders incorrectly. Instead of having control flow in one place, and a single source of truth, it creates an additional point of failure.

Stock price showing up wrong? That might now be a problem in your backend logic or in your complex htmx engine, buried in some template tag.


You don't need to mix logic with templates. I just produce a context (result from database queries and some such) and pass it to a function that produces HTML from the data in the context. The data must not be changed in the templating function. This is something I try to avoid as well in order to maintain separation of concerns in the backend code.

Edit for clarification: The context holds the data that is rendered as-is by the templating function. So any transformations from the "raw" database data and the context data happens beforehand and the templating function just turns it into HTML.
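A sketch of that separation, with made-up names: all transformation happens while building the context, and the templating function is a pure formatter of it.

```python
# Hypothetical sketch: build the context first, then render it as-is.
from html import escape

def build_context(db_row):
    # All transformation of the "raw" data happens here, before templating.
    return {
        "title": db_row["title"].strip().title(),
        "price": f'{db_row["price_cents"] / 100:.2f}',
    }

def render(ctx):
    # Pure function of the context: no queries, no mutation, just HTML.
    return f'<h1>{escape(ctx["title"])}</h1><p>${ctx["price"]}</p>'

html = render(build_context({"title": " wool socks ", "price_cents": 1299}))
# <h1>Wool Socks</h1><p>$12.99</p>
```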


I can understand the failure to build a critical mass of contributors, but can you share some examples of where your work was duplicated instead of built upon?

I’ve read through some of your async work in the past and from an initial glance, it seemed like you had the right idea by wrapping existing event libs and exposing familiar event loop idioms. At the very least, it seemed uncontroversial so I’m interested to see why others would choose not to build upon it.


> can you share some examples of where your work was duplicated instead of built upon?

Wookie (http://wookie.lyonbros.com/) was the main one, or at least the most obnoxious to me. I was trying to create a general-purpose HTTP application server on top of cl-async. Without naming any specific projects, it was duplicated because it (and/or cl-async) wasn't fast enough.

> At the very least, it seemed uncontroversial so I’m interested to see why others would choose not to build upon it.

A superficial need for raw performance seemed to be the biggest reason. The thing is, I wasn't opposed at all to discussions and contributions regarding performance. I was all about it.

Oh well.


If you found this interesting and want more, then I think this is a good follow up to read next [1]. It takes a similar approach to introducing the code-as-data idea, but uses xml instead of json as an introductory structure.

[1] http://www.defmacro.org/ramblings/lisp.html


Author here: Am a fan of Slava, and loved his essay on Lisp. Highly recommend!


Agree - Slava’s article has gotten a lot of action on hn over the years.

(https://hn.algolia.com/?q=http%3A%2F%2Fwww.defmacro.org%2Fra...)


I think the kind of licensing you’re referring to is usually called permissive licensing [1]

[1] https://en.wikipedia.org/wiki/Permissive_software_license


I am merely stating that less restrictions = more freedom (ie more choices of what you can do) to the initial user. I'm also pointing out that this freedom of choices does not translate to downstream users since the previous user is free to choose to restrict it. GPL has less freedom (by virtue of adding restrictions on what can be done), but it does this to make sure all users have equal freedom. Some people value the higher freedom more, other people value the equality more.

Separately, I was pointing out that contributing back upstream is a beneficial and nice thing to do, but is orthogonal to freedom. If anything, forcing it means there is less freedom, since the person loses the freedom to choose whether they wish to or not. I'm not judging which is better either, just pointing out the differences. Sometimes less freedom for the individual is better for the whole (to ensure equal freedom for all, or so changes are contributed back); we see it in society too: we give up personal freedom for the benefit of society as a whole.


Here are some examples showing Common Lisp’s built-in support for multidimensional arrays: https://lispcookbook.github.io/cl-cookbook/arrays.html


Can you share more info about the parentheses being a hazing test? I’ve seen Dylan syntax [1], but is there something else that shows the parentheses to be unnecessary in 2020?

[1] https://en.wikipedia.org/wiki/Dylan_(programming_language)


I've had good experience with Clojure's cautious approach to parentheses.

It's not radical, in that it's still basically parentheses, but their nesting is greatly reduced by a combination of tricks:

Flattening, if reversible. E.g. if something's always a list of pairs, then it's represented as a flat list with pairing inferred.

Having shorthands in the syntax, e.g. [a b c] for (vector a b c).


In my admittedly limited experience with Clojure a few years ago, IIRC it seemed like it just replaced some parentheses with square brackets, but did not actually reduce the number of total brackets by much.


Yes, the kernel does remove unmaintained/unused code according to GKH at 30:30 in this interview [1].

[1] https://youtube.com/watch?t=1830&v=t9MjGziRw-c


I think you can draw this ‘core of an environment’ parallel between Forth and Scheme: both are small languages and they emphasize growing the language to the problem domain [1]. Common Lisp, on the other hand, is a large language: implementations provide much more than a foundational core, and a fairly comprehensive list of libraries exists. I think RPG’s Worse Is Better highlights some of the reasons why CL isn’t as popular as other languages [2].

[1] https://youtube.com/watch?v=_ahvzDzKdB0

[2] https://www.dreamsongs.com/WIB.html


Off topic, but this (from RPG's Worse is Better) sounds very familiar:

> Part of the problem stems from our very dear friends in the artificial intelligence (AI) business. AI has a number of good approaches to formalizing human knowledge and problem solving behavior. However, AI does not provide a panacea in any area of its applicability. Some early promoters of AI to the commercial world raised expectation levels too high. These expectations had to do with the effectiveness and deliverability of expert-system-based applications.

