The author assumes serving documents (i.e. compound data) is not RESTful and a "hack".
In the original Fielding paper, REST is explicitly defined as an architecture for "large-grained resources" aka "documents". So this "hack" the author perceives is inherent to REST.
If your API is small-grained (like most HTTP/web/JSON APIs are), then Fielding explicitly states you shouldn't be using REST.
Quite interestingly, and it seems not well known to many, HTTP/1.1 had several HTTP/2-like proposals, like MULTIHEAD and MULTIGET methods. Fielding rejected them on the basis that they're not RESTful.
The current praise of HTTP/2 as a major win for REST APIs shows how differently "REST" is understood by its proponents today compared to its original meaning. If doing REST today means doing the opposite of what it was intended to do, it raises the question of why we believe we get the same benefits REST offered under its original definition. A quick read shows we don't, but the cultural ingraining in favor of "believing" is huge.
TL;DR: Nothing to see here, keep doing what works.
P.S.: Server push has been deprecated, it's basically dead.
"RESTful" tends to be used as though it means "good" these days, as though the only good APIs are RESTful. I actually don't think RPC is a bad approach if it suits your problem domain, but designing an RPC API when you think you're not is... probably a bad thing.
REST as an architecture has a lot more in common with how we users traverse the web than how we imagine our machines to query an API. We go from page to page and interpret the content on each page to decide which link to follow next. The HTML media type is understood by our browsers, so we're presented with a consistent way of traversing, submitting forms, etc. that behaves consistently across any site that provides an HTML resource. And a website can update its pages as much as it wants, and as long as it's still providing HTML, a user can learn to navigate it again.
Of course, that genericity and reliance on human learning is exactly what makes HTML alone not a good media type for machine-to-machine REST APIs, but that's why media types are (supposed to be) a cornerstone of REST: they describe the schematic structure of a general kind of resource, specifying how such resources may relate to other resources and what other kinds of data they might contain, but leave it up to the individual resources to actually make use of those facilities (and up to the client to reason against those facilities).
But try finding a "RESTful" API that actually does that. Most of the ones I've seen actually use a bank of statically-known URLs that report a content-type of JSON whose actual fields are statically-known by separate specification. It doesn't really matter whether you're using HTTP or not, that's just RPC.
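To make the distinction concrete, here is a minimal sketch in Python. The payload shape is a hypothetical HAL-style document (the `_links` structure, URLs, and relation names are all illustrative, not any specific API's format): a hypermedia client discovers URLs from link relations at runtime, while an RPC-style client builds them from a statically-known template.

```python
# Hypothetical HAL-style response; names and structure are illustrative only.
order = {
    "id": 42,
    "status": "shipped",
    "_links": {
        "self":     {"href": "/orders/42"},
        "customer": {"href": "/orders/42/customer"},
        "cancel":   {"href": "/orders/42/cancel"},
    },
}

def follow(doc, rel):
    """A hypermedia client looks up URLs by link relation at runtime,
    instead of constructing them from out-of-band documentation."""
    link = doc["_links"].get(rel)
    if link is None:
        raise KeyError(f"resource offers no '{rel}' relation")
    return link["href"]

# RPC style: the URL template /orders/{id}/customer is statically known
# to the client from a separate specification.
rpc_url = f"/orders/{order['id']}/customer"

# REST style: the client only knows the relation name "customer"; the
# server can move the resource as long as it keeps advertising the link.
rest_url = follow(order, "customer")
```

Here the two URLs happen to coincide, but only in the second case is the server free to change `rest_url` tomorrow without breaking the client — which is the property most "RESTful" APIs give up.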
The whole "REST is good and everything else is dumb" attitude is something I've tried to help people get away from for about a decade. These days it's "GraphQL is good and everything else is dumb". See some of the comments in here.
> I wish anything built on REST would die. Conflation of the envelope and the payload is idiotic and adds only pain, no value.
Can you provide a link to the MULTIGET or MULTIHEAD proposals?
I'm very interested in getting GraphQL features into HTTP and wasn't aware of these prior proposals. A quick web search didn't turn up anything on them for me either. Thank you!
> In the original Fielding paper, REST is explicitly defined as an architecture for "large-grained resources" aka "documents". So this "hack" the author perceives is inherent to REST. If your API is small-grained (like most HTTP/web/JSON APIs are), then you shouldn't be using REST in the first place
This reminds me of the time when people took a format meant for sprinkling some metadata here and there on a piece of a large document, and started using it primarily for writing "documents" where markup/data ratio is > 2.
If you're not using the full XML ecosystem then yes, it can be overly verbose for just a simple document, especially if the person who defined the format went all in on "XML-ising" it. The worst and most common offender is using elements to define lists of key-value pairs instead of using attributes (especially when key and value are each an element wrapped in yet another element!). A close second is not specifying a default namespace, so everything needs a namespace prefix in front of it, making it hard to work out what the hell the thing actually is as well as pointlessly inflating the file size.
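For illustration, here is a small Python sketch of the two styles described above. The config format is made up for the example; both documents carry exactly the same data, and the stdlib `xml.etree.ElementTree` parser reads each.

```python
import xml.etree.ElementTree as ET

# The "worst offender" style: each key-value pair wrapped in its own
# element inside yet another element (the format is hypothetical).
verbose = """
<config>
  <entries>
    <entry><key>host</key><value>example.com</value></entry>
    <entry><key>port</key><value>8080</value></entry>
  </entries>
</config>
"""

# The same data expressed with attributes: a fraction of the markup.
compact = '<config host="example.com" port="8080"/>'

def read_verbose(xml_text):
    # Walk every <entry> and pair up its <key> and <value> children.
    root = ET.fromstring(xml_text)
    return {e.findtext("key"): e.findtext("value")
            for e in root.iter("entry")}

def read_compact(xml_text):
    # Attributes come back directly as a dict.
    return dict(ET.fromstring(xml_text).attrib)

print(read_verbose(verbose) == read_compact(compact))  # True: same data
```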
When you're dealing with large amounts of structured data of different kinds that you want to combine in many different ways there's not really anything that comes close to the combination of XML, XPath, schemas, XSLT and XQuery. But if you're not something like the huge medical publisher I worked for then you definitely don't need all that.
I agree. I didn't mention this in my original comment, but the follow-up to the story is: the industry eventually moved on to leaner formats like JSON, but in the process, threw away all that made XML very useful over the years - like schemas, for example. Arguably, this is a step back.
I do maintain a couple of JSON schemas and they're a bit clunky due to the JSON syntax (although they're nicer in YAML, which my validator can use), but the newer versions of the spec do allow you to specify rules complex enough to cover most cases. But an equivalent to XSLT doesn't make a huge amount of sense for JSON.
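As a flavor of what such rules look like, here is a small hypothetical JSON Schema fragment and a toy checker for a handful of its keywords. This is only a sketch: a real validator (and the full spec) covers vastly more than the `type`, `required`, `minimum`, and `maximum` keywords handled here.

```python
import json

# A small, hypothetical schema fragment (a draft-agnostic subset).
schema = json.loads("""
{
  "type": "object",
  "required": ["name", "port"],
  "properties": {
    "name": {"type": "string"},
    "port": {"type": "integer", "minimum": 1, "maximum": 65535}
  }
}
""")

TYPES = {"object": dict, "string": str, "integer": int}

def check(doc, schema):
    """Toy check of a few keywords; real validators cover far more."""
    if not isinstance(doc, TYPES[schema["type"]]):
        return False
    if any(k not in doc for k in schema.get("required", [])):
        return False
    for key, sub in schema.get("properties", {}).items():
        if key in doc:
            v = doc[key]
            if not isinstance(v, TYPES[sub["type"]]):
                return False
            if "minimum" in sub and v < sub["minimum"]:
                return False
            if "maximum" in sub and v > sub["maximum"]:
                return False
    return True

print(check({"name": "api", "port": 8080}, schema))   # True
print(check({"name": "api", "port": 99999}, schema))  # False: over maximum
```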
I've got an XSLT stylesheet I wrote you can add to an HTML page via the xml-stylesheet directive that turns it into a pretty-printed display of its own markup, complete with syntax highlighting. Pointless but neat that it's possible :)
Oh! This explains so much, particularly why I've always felt like most REST APIs aren't really... REST.
Development teams go "ok, we need to design a REST API", and, because the inherent architectural characteristics of REST (large-grained) clash with their own needs (small-grained), what they end up with is something more like a Frankenstein-esque REST-RPC monster.
If you make your quotes small enough you can make them mean anything, but I invite you to go take a look at the dissertation. "Large-grained resources" does not mean "compound documents"; the REST dissertation has no comment on compound documents or anything resembling the concept, which came along later. It neither forbids nor allows them, but many of the concepts of REST weigh against them.
Beyond whether it's a RESTful API or not (a grey area), it adds headaches, and is rather unnecessary with good API design anyway, which is what this article was about.
More importantly, compound documents mess with the cacheability of resources. A resource cannot declare its own cacheability (which REST suggests it should) through the uniform interface when it's munged into a mishmash of other data. A foo and a bar and a baz are now all cached together under the same headers with the same /foo?include=bar,baz identifier, meaning that if a bar is likely to change often you have to request the whole thing again. This is one of many reasons compound documents are a pain in the ass, but there are plenty more.
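A toy cache simulation makes the invalidation cost visible. Everything here is made up for the sketch (the URLs, the in-memory "server", the manual `del` standing in for expiry): with separate resources only the volatile one is re-fetched, while the compound document drags the unchanged parts along with it.

```python
# Toy cache keyed by URL; a `del` stands in for header-driven expiry.
cache = {}

def fetch(url, server_state):
    """Return a cached body if present, otherwise 'fetch' and cache it."""
    if url not in cache:
        cache[url] = server_state[url]()
    return cache[url]

foo = {"name": "foo"}
bar = {"count": 0}  # bar changes often
baz = {"flag": True}

server = {
    "/foo":                 lambda: dict(foo),
    "/foo/bar":             lambda: dict(bar),
    "/foo?include=bar,baz": lambda: {**foo, "bar": dict(bar),
                                     "baz": dict(baz)},
}

# Separate resources: when bar changes, only /foo/bar expires.
fetch("/foo", server)
fetch("/foo/bar", server)
bar["count"] += 1
del cache["/foo/bar"]  # invalidate just the volatile resource
# /foo stays cached; one small re-fetch picks up the change.

# Compound document: the same change expires the whole munged payload,
# so foo and baz get re-transferred even though they didn't change.
fetch("/foo?include=bar,baz", server)
bar["count"] += 1
del cache["/foo?include=bar,baz"]
```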
> The current praise of HTTP/2 as a major win for REST APIs shows how differently "REST" is understood by its proponents today compared to its original meaning. If doing REST today means doing the opposite of what it was intended to do, it raises the question of why we believe we get the same benefits REST offered under its original definition. A quick read shows we don't, but the cultural ingraining in favor of "believing" is huge.
This whole paragraph makes no sense. I have been making APIs since ~2008 so let's not act like I'm brand new at this.
The point I was making here is that in the past people have moaned about REST because "making lots of requests is slow". That's a common complaint from people who build their APIs poorly: they don't take into consideration what the consumers will be doing, just flop out a very normalized database and force clients to construct their own data models from that. It's also a concern from people who haven't got the first clue what HATEOAS means or how it can help abstract state from multiple clients up into the backend. They just see it as "having to make more requests".
Now that the fear of multiple requests is subsiding, the exact same concepts that made REST beneficial for many APIs in the HTTP/1.x past still exist; they're just quicker in HTTP/2, and quicker again in HTTP/3. REST and HTTP just get quicker and more useful over time.