Hacker News | satellite2's comments

Mmh, yes and no. There are two prongs to this hypothesis, and you can summarize the second one (the main one) as basically wanting to piss off / punish the Europeans. That is perfectly on point and fits the character.

Occam's razor might well describe actions of a cold calculating leader. But of Trump..?


Interesting. When it's the state, I think the overwhelming opinion is that predictive policing is dangerous, but when it's a private company we actually want it enforced?


They could not be held accountable to warn her if they had not done the analysis. They did. Their organizational conclusion was that it was potentially an unsafe trip. Shit, they could have just cancelled the ride dynamically and re-assigned her. Why wouldn’t they do that? It’d probably be more expensive. Maybe they’d get more cancelled rides. Maybe this woman wouldn’t have been raped by an agent of Uber selected for and sent to her by them.


Wouldn't they then expose themselves to discrimination and loss of revenue lawsuits from targeted drivers?


It depends. Are the inputs to the algorithm themselves discriminatory? If so, then yes that would be appropriate. But that is a different conversation. They determined the passenger may be unsafe and did nothing.

Mind you, these companies work very hard for us to not know how they match A to B, usually so we don’t notice things like their disregard for safety.


The inputs wouldn’t even matter; the inputs could even be above reproach but if there were disparate impacts in terms of outcomes, the case for liability could be made.


Maybe, but they’re clearly liable for not using the information.


Aren't you just moving the problem a little bit further? If you can't trust it will implement carefully specified features, why would you believe it would properly review those?


It's hard to explain, but I've found LLMs to be significantly better in the "review" stage than the implementation stage.

So the LLM will do something and not catch at all that it did it badly. But the same LLM, asked to review against the same starting requirement, will almost always catch the problem.

The missing thing in these tools is that automatic feedback loop between the two LLMs: one in review mode, one in implementation mode.
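One way to sketch that loop, with hypothetical `implement` and `review` callables standing in for the two LLM calls (the names and interface are illustrative, not any existing tool's API):

```python
def build_with_review(requirement, implement, review, max_rounds=3):
    """Alternate an implementer model and a reviewer model until the
    reviewer reports no issues (or we give up after max_rounds)."""
    code = implement(requirement, feedback=None)
    for _ in range(max_rounds):
        # The reviewer sees only the original requirement and the code,
        # not the implementer's conversation history.
        issues = review(requirement, code)
        if not issues:
            return code
        # Feed the review findings back into implementation mode.
        code = implement(requirement, feedback=issues)
    return code
```

The design point from the thread is the second argument to `review`: the reviewer starts from the requirement and the code alone, not from the implementer's context.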


I've noticed this too and am wondering why this hasn't been baked into the popular agents yet. Or maybe it has and it just hasn't panned out?


Anecdotally, I think this is in Claude Code. It's pretty frequent to see it implement something, then declare it "forgot" a requirement and go back and alter or add to the implementation.


AFAICT this is already baked into the GitHub Copilot agent. I read its sessions pretty often and reviewing/testing after writing code is a standard part of its workflow almost every time. It's kind of wild seeing how diligent it is even with the most trivial of changes.


You have to dump the context window for the review to work well.


Exactly, his whole tirade felt extraordinarily far fetched, sketchy if not outright racist.


Well, it's 2025; we've just spent the better part of the year discussing the bitter lesson. It seems clear that solving the more general problem is key to innovation.


Hardware is not like software. A general-purpose humanoid cleaning robot will be superior to a robot vacuum, but it will always cost an order of magnitude more. This is different from software, where cost decreases exponentially and you can do the computation in the cloud.


I'm not sure advancements in AI and advancements in vacuum cleaners are at similar stages in terms of R&D. I'd be very wary of trying to apply lessons from one to the other.


Alternatively:

  enclave, err := secret.GetEnclave()
  if err != nil {
      // err reports that the platform doesn't support an enclave
      return err
  }
  enclave.Do(f)


Incredible teamwork: OOP dismantles society in paragraph form, and OP proudly outsources his interpretation to an LLM.. If this isn’t collective self-parody, I don’t know what it is.


Are those really standardized in the US?

Where I live, conditions vary widely. And the switching costs can easily dominate the total cost if you move or sell.

I've found that, taking this into account, it was better to give up a slightly better interest rate in exchange for better conditions.


Yes, extremely, especially for confirming loans: https://singlefamily.fanniemae.com/originating-underwriting/...

Patrick McKenzie (https://news.ycombinator.com/user?id=patio11) has a great deep dive on this: https://www.bitsaboutmoney.com/archive/mortgages-are-a-manuf...

Closing/switching costs are certainly a consideration still, but the "Truth in Lending Act" (TILA) made it easier to compare the all-in cost by providing a standardized APR number, which is what the dashboard focuses on.
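As a rough illustration of why a single APR number makes loans comparable: APR folds upfront fees into the rate by solving for the rate that discounts the payment stream back to the amount you actually received. A minimal sketch (a simplified monthly-compounding model, not the full Regulation Z calculation):

```python
def apr(principal, fees, monthly_payment, n_months):
    """Solve (by bisection) for the annual rate at which the payment
    stream's present value equals the net amount actually received."""
    net = principal - fees  # fees reduce what you effectively borrow

    def present_value(annual_rate):
        r = annual_rate / 12
        if r == 0:
            return monthly_payment * n_months
        # Standard annuity present-value formula.
        return monthly_payment * (1 - (1 + r) ** -n_months) / r

    lo, hi = 0.0, 1.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if present_value(mid) > net:  # PV too high -> rate guess too low
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

Two loans with the same nominal rate but different fees come out with different APRs, which is exactly what makes the single number comparable across lenders.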


* conforming loans


Dynamic LED plates that display a TOTP-style rotating code, so that only someone with central access can determine who is who on a given date.
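The rotating code on such a plate could work like an RFC 6238 TOTP: the plate and the central registry share a secret, the plate displays the code for the current time step, and only the registry can map a displayed code back to a vehicle. A minimal sketch of the code generation (standard-library only):

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 8) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter, dynamically
    truncated to a short decimal code."""
    counter = unix_time // step
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

For a plate, the step would presumably be closer to a day than to the usual 30 seconds, so the code changes per date rather than per login.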


Of course. But what if the holding company lives in a country that doesn't enforce this (or is too weak to)? Then all the subsidiaries are effectively sovereign from the host country's perspective.

It seems the solution is ages old. Don't have the holding incorporated in an empire...


How would this work in practice? If the empire wants to get at your data, why do you think it would shy away from pressuring a country so weak that it can't afford to enforce this on their companies?


Then the empire just says that they want the data or you won't be allowed to operate in the empire, which would be bad for profits and anger shareholders.

