
Why shouldn’t it boil down to “whataboutism”, aka comparison and putting things into context? Especially during the UK’s obvious slide into disguised authoritarianism.

One can also ask how HK ended up with the English language and common law in the first place… though that wasn’t so recent.


It doesn't show that they "struggle". It shows that they don't behave according to modern standards. I wouldn't put much weight on an industry without a sound scientific basis, one that classified homosexuality as a disease not so long ago. The external validity of the study is dubious; let's see a comparison with no therapy, alternative therapy, and standard therapy, and then compare success rates.

Surely Marx would disagree with such assessment and call it idealistic and not grounded in material reality?

There is no reason to buy into the whole Marxist framework just because you share one single sentiment that various thinkers had before and after him.

> one single sentiment

Lol alienation of labor is not a single "sentiment" - it's a core principle. So like it or not you share a core principle with Marx.


The sentiment is shared with Jean-Jacques Rousseau, Adam Smith, Wilhelm von Ketteler, Louis Blanc and probably lots of other lesser-known people. Marx's theory of alienation is far more developed and nuanced than the generic cog-in-the-machine critique explored by many other people of various political inclinations, not only Marx.

> sentiment

...

> theory

these two words aren't interchangeable

> Jean-Jacques Rousseau, Adam Smith, Wilhelm von Ketteler, Louis Blanc

...

> generic cog-in-the-machine critique that is explored by many other people

literally only one of the names you mentioned was writing post industrial revolution - the rest had literally no notion of a "cog in the machine"

you're trying so hard to disprove basically an established fact: Marx's critique of exploitation of labor post industrial revolution is certainly original and significant in his own work and those that followed.


> these two words aren't interchangeable

Exactly. That's why you can't jump from "people don't feel like they own their labor" and "people bemoan their boss" to Marx's theory of alienation.

> literally only one of the names you mentioned were writing post industrial revolution - the rest had literally no notion of "cog in the machine"

But the very framing of this as an ill unique to industrial society is Marxist. Slavery, corvée labor, taxes, poor laborers, and marginalisation existed for thousands of years in one form or another.

> you're trying so hard to disprove basically an established fact: Marx's critique of exploitation of labor post industrial revolution is certainly original and significant in his own work and those that followed.

I don't dispute that Marx's critique of exploitation of labor post industrial revolution is original or significant. I dispute your claim that people who share similar sentiment have to agree with Marx's theory of alienation.


Consistency, simplicity, RPC semantics.


Sending a GET request with a body is just asking for all sorts of weird caching and processing issues.


I get that the GP’s suggestion is non-conventional, but I don’t see why it would cause caching issues.

If you’re sending over TLS (and there’s little reason why you shouldn’t these days) then you can limit these caching issues to the user agent and the infra you host.

Caching is also generally managed via HTTP headers, and you also have control over them.

Processing might be a bigger issue, but again, it’s just any hosting infrastructure you need to be concerned about and you have ownership over those.

I’d imagine using this hack would make debugging harder. Likewise for using any off-the-shelf frameworks that expect things to conform to a Swagger/OpenAPI definition.

Supplementing query strings with HTTP headers might be a more reliable interim hack. But there’s definitely not a perfect solution here.
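To make the header-based interim hack concrete, here is a minimal stdlib sketch. The header name `X-Query-Json` is hypothetical; any name agreed between client and server would do, and the usual per-header size limits (often around 8 KB) still apply.

```python
import json
import urllib.request

# Hypothetical header name -- anything agreed between client and server works.
QUERY_HEADER = "X-Query-Json"

def build_get(url: str, query: dict) -> urllib.request.Request:
    """Build a GET whose structured query travels in a header
    rather than in the (problematic) request body."""
    req = urllib.request.Request(url, method="GET")
    req.add_header(QUERY_HEADER, json.dumps(query))
    return req

req = build_get("https://api.example.test/search", {"q": "cats", "limit": 10})
```

Unlike a GET body, headers are at least well-defined in HTTP, though intermediaries still won't include custom headers in cache keys unless told to (e.g. via `Vary`).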


To be clear, it's less of a "suggestion" and more of a report of something I've come across in the wild.

And as much as it may disregard the RFC, that's not a convincing argument for the customer who is looking to interact with a specific server that requires it.


Caches in web middleware like Apache or nginx ignore the GET request body by default, which may lead to bugs and security vulnerabilities.
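The collision this causes can be shown with a toy cache whose key has the same shape as nginx's documented default (`proxy_cache_key "$scheme$proxy_host$request_uri"`). The `origin` function is a hypothetical stand-in for an upstream whose answer actually depends on the body:

```python
# Toy cache keyed like nginx's default proxy_cache_key:
# scheme + host + request URI. The request body is not part of
# the key, so two GETs with different bodies collide.
cache = {}

def cached_get(scheme, host, uri, body, origin):
    key = (scheme, host, uri)  # body deliberately ignored, as in the default
    if key not in cache:
        cache[key] = origin(body)
    return cache[key]

# Hypothetical origin server whose answer depends on the body:
origin = lambda body: f"results for {body}"

first = cached_get("https", "api.test", "/search", '{"q": "cats"}', origin)
second = cached_get("https", "api.test", "/search", '{"q": "dogs"}', origin)
# `second` silently receives the cached "cats" response.
```

The second caller gets the first caller's results with no error anywhere, which is exactly the kind of bug that is miserable to debug in production.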


But as I said, you control that infra.

I don’t think it’s unreasonable to expect your sysadmins, devops, platform engineers, or whatever title you choose to give them, to set up these services correctly, given it’s their job to do so and there’s a plethora of security risks involved.

If you can’t trust them to do that little, then you’re fucked regardless of whether you decide to send payloads as GET bodies.

And there isn’t any good reason not to contract pen testers to check over everything afterwards.


> I don’t think it’s unreasonable to expect your sysadmins, devops, platform engineers, or whatever title you choose to give them, to set up these services correctly, given it’s their job to do so and there’s a plethora of security risks involved.

Exactly, and the correct way to set up GET requests is to ignore their bodies for caching purposes because they aren't expected to exist: "content received in a GET request has no generally defined semantics, cannot alter the meaning or target of the request, and might lead some implementations to reject the request and close the connection because of its potential as a request smuggling attack" (RFC 9110)
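The smuggling concern in that RFC passage boils down to two parsers disagreeing about whether a GET carries a body, and therefore about how many requests are on the wire. A toy sketch (both parsers are deliberately naive, not real proxy code):

```python
# A GET whose 32-byte "body" is itself a well-formed request.
raw = (
    b"GET /search HTTP/1.1\r\n"
    b"Host: example.test\r\n"
    b"Content-Length: 32\r\n"
    b"\r\n"
    b"GET /admin HTTP/1.1\r\nHost: x\r\n\r\n"
)

def ignores_body(data: bytes) -> list:
    """Parser that, like some intermediaries, ignores bodies on GET."""
    reqs = []
    while data:
        head, _, data = data.partition(b"\r\n\r\n")
        if not head:
            break
        reqs.append(head.split(b"\r\n")[0].decode())
    return reqs

def honors_content_length(data: bytes) -> list:
    """Parser that consumes Content-Length bytes of body after the headers."""
    reqs = []
    while data:
        head, _, rest = data.partition(b"\r\n\r\n")
        if not head:
            break
        lines = head.split(b"\r\n")
        reqs.append(lines[0].decode())
        clen = 0
        for line in lines[1:]:
            name, _, value = line.partition(b":")
            if name.lower() == b"content-length":
                clen = int(value)
        data = rest[clen:]
    return reqs
```

One parser sees a single request to `/search`; the other sees a smuggled second request to `/admin`. When the front proxy and the backend disagree like this, an attacker can slip requests past access controls.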

> And there isn’t any good reason not to contract pen testers to check over everything afterwards.

I am pretty sure our SecOps and Infra Ops and code standards committee will check it and declare that GET bodies is a hard no.


> Exactly, and the correct way to setup GET requests is to ignore their bodies for caching purposes because they aren't expected to exist

No. The correct way to set up this infra is the way that works for a particular problem while still being secure.

If you’re so inflexible as an engineer that you cannot set up caching correctly for a specific edge case because it breaks your preferred assumptions, then you’re not a very good engineer.

> and might lead some implementations to reject the request and close the connection because of its potential as a request smuggling attack"

Once again, you have control over the implementations you use in your infra.

Also, it’s not a request smuggling attack if the request is supposed to contain a payload in the body.

> I am pretty sure our SecOps and Infra Ops and code standards committee will check it and declare that GET bodies is a hard no.

I wouldn’t be so sure. I’ve worked with a plethora of different infosec folk, from those who mandated that PostgreSQL use non-standard ports for strict compliance with NIST, even for low-risk reports, to others who were fine with some pretty massive deviations from traditionally recommended best practices.

The good infosec guys, and good platform engineers too, don’t look at things in black and white like you are. They build up a risk assessment and judge each deviation on its own merit. Thus GET body payloads might make sense in some specific scenarios.

This doesn’t mean that everyone should do it nor that it’s a good idea outside of those niche circumstances. But it does mean that you shouldn’t hold on to these rigid rules like gospel truths. Sometimes the most pragmatic solution is the unconventional one.

That all said, I can’t think of any specific circumstance where you’d want to do this kind of hack. But that doesn’t mean that reasonable circumstances would never exist.


I work as an SRE and would fight tooth and nail against this. Not because I can’t do it, but because it’s a terrible idea.

For one, you’re wrong about TLS meaning only your infra and the client matter. Some big corps install a root CA on everyone’s laptop and MITM all HTTP/S traffic. The one I saw was Bluecoat, no idea if it follows your expected out-of-spec behavior or not.

For two, this is likely to become incredibly limiting at some point and require a bunch of work to re-validate or re-implement. If you move to AWS, do ELBs support this? If security wants you to use Envoy for a service mesh, is it going to support this? I don’t pick all the tools we use, so there’s a good chance corporate mandates something incompatible with this.

You would need very good answers to why this is the only solution and is a mandatory feature. Why can’t we cache server side, or implement our own caching in the front end for POST requests? I can’t think of any situations where I would rather maintain what is practically a very similar fork of HTTP than implement my own caching.


> Not because I can’t do it, but because it’s a terrible idea.

To be clear, I'm not advocating it as a solution either. I'm just saying all the arguments being made for why this wouldn't work are solvable. Just like you've said there that it's doable.

> Some big corps install a root CA on everyone’s laptop and MITM all HTTP/S traffic.

I did actually consider this problem too but I've not seen this practice in a number of years now. Though that might be more luck on my part than a change in industry trends.

> this is likely to become incredibly limiting at some point and require a bunch of work to re-validate or re-implement.

I would imagine if you were forced into a position where you'd need to do this, you'd be able to address those underlying limitations when you come to the stage that you're re-implementing parts of the wider application.

> If you move to AWS, do ELBs support this?

Yes they do. I've actually had to solve similar problems quite a few times in AWS over the years when working on broadcast systems, and later, medical systems: UDP protocols, non-standard HTTP traffic, client certificates, etc. Usually, the answer is an NLB rather than ALB.

> You would need very good answers to why this is the only solution and is a mandatory feature.

Indeed


There is no secure notion of "correctly" that goes directly against both specs and de facto standards. I am struggling to even imagine how one could make an L7 balancer that takes into account the possibility that someone would go against the HTTP spec and still get their request served securely and timely. I personally don't even know which L7 balancer my company uses or how it would cache GET requests with bodies, because I don't have to waste time on such things.

Running PostgreSQL on any non-privileged port is a feature, not a deviation from a standard. If you wanted to run PostgreSQL on a port under 1024, now that would be a security vulnerability, as it requires root access.

There is no reason to "build up a risk assessment and judge each deviation on its own merit" unless it is an unavoidable technical limitation. Just don't make your life harder for no reason.


> There is no secure notion of "correctly" that goes both directly against specs and de facto standards.

That's clearly not true. You're now just exaggerating to make a point.

> I am struggling to even imagine how one could make an L7 balancer that should take into account possibilities that someone would go against HTTP spec and still get their request served securely and timely

You don't need application-layer support to load balance HTTP traffic securely and timely.

> Running PostgreSQL on any non-privileged port is a feature, not a deviation from a standard. If you want to run PostgreSQL on a port under 1024 now that would be security vulnerability as it requires root access.

I didn't say PostgreSQL listening port was a standard. I was using that as an example to show the range of different risk appetites I've worked to.

Though I would argue that PostgreSQL has a de facto standard port number. And thus by your own reasoning, running that on a different port would be "insecure" - which clearly is BS (as in this rebuttal). Hence why I called your "de facto" comment an exaggeration.

> There is no reason to "build up a risk assessment and judge each deviation on its own merit" unless it is an unavoidable technical limitation.

...but we are talking about this as a theoretical, unavoidable technical limitation. At no point was anyone suggesting this should be the normal way to send GET requests.

Hence why I said you're looking at this far too black and white.

My point was just that it is technically possible when you said it wasn't. But "technically possible" != "sensible idea under normal conditions"


Ancient Greeks attributed Mycenaean remains to the “Age of Heroes”. They were amazed by the scale and engineering quality of the work and thought it was done by gods and mythical creatures such as Cyclopes. They didn’t approach progress linearly or mono-dimensionally.

Heinrich Schliemann was probably the first to connect the myths with tangible proof through archaeology, in the late 19th century. Lévi-Strauss's work came much later and was more political and polemical than scientific.


Yeah, the “Age of Heroes” was just Ancient Aliens for the Greeks: “we can’t do it, so it can’t be human work”


Exactly, it wasn’t some view to the past as the root of great culture.

If you read the actual Polybius you’ll see that there were no ideas of evolution, or of us being in the same category as other living things.


Because the EU citizens keep voting for those politicians. It’s as simple as that. There are dozens of different parties in each EU country, but people keep voting for parties that push chat control.


Sad to see Europe morph from postal secrecy to chat control. I can’t imagine 19th century intellectuals would do anything other than laugh in the face of censors who would suggest that the governments need to read personal correspondence to protect children and/or national interests against Prussia/Russia/China.


The EU parliament and the head of states that comprise the EU council are elected by the EU citizens. Why is there such discordance between the two? Isn’t it mostly the same people from the same parties?


Because in a democracy it's the legislative assembly - parliament - that decides on laws, not the executive.

The EU commission is the executive and represents the currently in power government, NOT parliament.


Sure, but I'm asking why there is a discordance between them.


In this specific case - because Parliament has rejected this law 50 times. Yes, really that much, they've been at it since 11 May 2022. Governments really want access to people's chats, and parliament really does not want to give it to them. Not the EU parliament, not member state parliaments. Almost all of them.

The EU Commission has the power to force this law through, over the objections of all EU member state parliaments and the EU parliament and only the EU Council has the power to stop them. So by allying with the council the EU Commission is hoping to force parliament by threatening them with worse, and showing that the Council will not intervene if they do force worse through.

In general there is discordance, because the EU countries are forced into the union because of the need to compete with other large players like US and China, not because they want to. So every EU country wants to be part of the EU, but they don't want that to mean anything and don't want to give up even the slightest bit of control to the EU. And that's ignoring animosities, such as that France really does not want Germany to have any say whatsoever in what happens in France, or that Spain wants to repress Barcelona's separatism and 100 other issues.

The EU constitution disaster, and the fact that the outcome of that effort was worse than the EU had imagined possible when it started, is a good illustration of the problem.

