
I think there is a big range of opinions. Some hardcore housing resisters get a lot of sway because of the way the processes work (public consultations, activism, etc). Lots of people are a bit sceptical for pretty legitimate reasons – noise, traffic, disruption, aesthetics.

I think there probably are balances where people could generally be happier with new construction and that opinion could be clear enough to overrule those who would never be happy with it. Things like:

- ways of having locals vote on new development with small enough constituencies that they can be paid off (ie some of the gains that would have gone to developers, or other positive externalities, can be captured by those who are more affected) with lower taxes or new roads or parks or whatever

- making residents vote instead of having consultations will lead to less bias in favour of the most obnoxious

- allowing apartment blocks to vote to accept offers of redevelopment (eg you get a newer apartment; more apartments are added to the block and sold to fund the redevelopment)

- having architectural standards that locals are happy with for new buildings

- allow streets to vote to upzone themselves (I don’t love this as it’s basically a prisoner’s dilemma – if your street does it, land value increases and you gain; if every street does it, land value only increases a bit but now you are upzoned)

I basically think that there are developments that can be broadly appealing, and that in lots of places we are in a bad local minimum of bigger governments trying to push development on unwilling smaller governments/groups.


Yet I'm not sure it comes from more localized decision-making. It might actually come from making the rules clearer and less discretionary.


Isn’t the initial response to a lack of housing that people consume less housing than they would like, rather than homelessness, eg families with children sharing rooms more than they might like, adults living with roommates, or just people having to live further away from where they would like to be (or moving out of a city altogether)?

I don’t dispute that there are levels of affordability bad enough that they start to lead to various forms of homelessness. But it doesn’t seem like a fundamental rule that, if some people can’t afford to live alone in a large amount of housing, they also can’t afford to share a smaller amount with roommates, or that the right level of housing prices should also price some people out of those arrangements (ie it demands a pretty high level of inequality, if you assume that the market allows typical people to afford to live alone and that sharing can typically reduce per-person rents by half or more).
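To make that parenthetical concrete with hypothetical numbers: if a typical person can afford a $1,500/month studio alone, then a $3,600/month three-bedroom split three ways costs $1,200 per person; being priced out of even that arrangement requires an income far below the typical one, which is the level of inequality I mean.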


It’s all fun and games until your municipality starts banning adult co-living situations.

You underestimate how intrusive these people will be to protect the value of the single largest asset most of them will ever own


Individual landlords also dislike those and may not allow it, because the ability of the household to make rent now depends on all adults in that household. 3-4 broke adults who may only loosely know each other and with their own individual plans are a lot less stable than, for example, married couples. It's basically 3x the risk and hassle. Chances are you'd have to evict them after just a couple months.

They'd rather just leave the apartment empty and hope to find a better tenant.


> municipality starts banning adult co-living situations

Is that not typically happening only for more egregious situations? E.g. the ban is on more than 3 or 4 unrelated adults living together. There are plausible reasons why that should be regulated, it is not clearly a conspiracy by homeowners to prop up the value of their own homes.


The government intervention in the markets for agricultural products is because farmers are often an important political base, not to keep prices low for consumers. Without the intervention, prices would fall and supply would have to fall too, which would be good for consumers and bad for producers.


I'd argue that government intervenes in agriculture for supply-chain safety first and foremost.

Governments are concerned with this because no meat (or rice) on supermarket shelves during a supply shock of any kind leads to rapid unplanned regime change very consistently.

So basically every western government throws a few billion a year at local farmers so they stay (and this is not a US phenomenon; the EU does the exact same thing). This is also less necessary in countries where labor is cheap.

This is arguably good for most consumers, because essential calories being more affordable "on average" is not good enough: you'd typically rather pay (partially indirectly) a higher average price to guarantee availability and avoid starving.


And the biggest reason, learning from history, is that instability in food prices destabilizes nations.

There have been many “bread riots.”


Bread is much cheaper today (at least relative to incomes) than in the days of bread riots.



Wheat is HEAVILY subsidized.


This and companies like Paddy Power are the breakout British business successes of the last few decades.


Each region’s greatest successes represent them quite well:

UK: gambling and porn

Europe: luxury goods

US: advertising and retail e-commerce

China: direct from manufacturer commerce


I’m not so convinced by this. In the UK I think the big home-grown successes either serve the local market physically, like supermarkets (though they are not very new as companies), or rely on some difference in regulations from other anglophone countries. A business not in those categories needn’t be worse in general terms to lose to an American company, but the American competitor will have lots of advantages (cheaper fundraising, bigger market with more discretionary spending, more pro-business political environment, etc).


Why does being a top AI researcher so often come with this philosophical bent you describe?


You are paying the smartest people in the world to think really really hard, and it turns out they might also think really really hard about not making the world a worse place.


it's not working


US isn't randomly launching nukes yet


yet.

Because so far, if we left it to AI, they would be much quicker to do it [1]

[1] https://www.newscientist.com/article/2516885-ais-cant-stop-r...


virtue signaling is the goal and it's working


Is this really the case though? How many of the smartest people do you really think fit this narrative?! I want to believe there are at least some, but I think they are a minority in this group… otherwise all these pretty much evil corporations would have an awfully difficult time attracting talent? Maybe some do, but…


Most evil corporations have fairly normal jobs available.


If you want to make the world a better place, as OP stated, perhaps you can get a normal job at a less evil corp?


Most companies are evil in some way, the question is how evil and how close you are to the evil. Most people will pick "not that evil but pays a lot". A few will take "pretty evil and pays more than a lot". Some will choose "less evil and pays poorly". (It's worth noting that there are a lot of jobs that are not at the Pareto frontier and are "more evil and pay worse" but social mobility etc. cause them to be selected anyway).


When presented with a choice between:

1. Take a job making $$$$$$$ at a company making the world worse.

2. Take a job making $$$ at a company not making the world worse.

Very few people have a personality such that they'll pick 2.


Exactly what I was asking OP; her/his comment sounded like people will pick the latter (I agree with you).


Except they do? They are certainly not making it a better place. Like, ok, it is money for a few companies and salaries, it is business and probably fun work.

But it is absurd to claim it is "making the world a better place".


I'm not sure you can provide an objective means (ie a way to show that it is absurd) of explaining how an AI researcher is making the world a worse place. It's going to come down to disagreeing about some axiom like "is ASI rapidly approaching" or "is AGI good to have", and there's no right answer to those.


Not really. 15-20 years ago that same upper echelon of college/professional school graduates you're describing were going into finance.


I would think it's because of the staggering money they're making. According to Fortune[0]:

> Altman said on an episode of Uncapped that Meta had been making “giant offers to a lot of people on our team,” some totaling “$100 million signing bonuses and more than that [in] compensation per year.”

> Deedy Das, a VC at Menlo Ventures, previously told Fortune that he has heard from several people the Meta CEO has tried to recruit. “Zuck had phone calls with potential hires trying to convince them to join with a $2M/yr floor.”

If you're making a minimum of $2M/year or even 50x that, you can afford to live according to your values instead of checking them at the door.

[0] https://archive.ph/lBIyY


I see you're treating Sam Altman as some kind of trustworthy source. Might it be possible that he's making that up -- of course, nobody will ever call him on it! -- and exaggerating the numbers to make his company and team look really good and ethical for not accepting such lucrative offers, or perhaps to make them sour on Meta for not receiving $100M offers?


My experience with researchers (though not in AI) is that it's a bunch of very opinionated nerds who are mostly motivated by loving a subject. My experience is that most people who think really deeply and care about what they do also care more that their work is prosocial.


> care more that their work is prosocial

These takes are always so funny to me. The whole reason we even have the internet is because the US government needed a way for parties to be able to communicate in the event of nuclear fallout. The benefits that a technology provides are almost always secondary to its applications in warfare. Researchers can claim to care that their work is pro-social, and they may genuinely believe it; but let's not kid ourselves that that is actually the case. The development of technology is simply due to the reality of nations being in a constant arms race against one another.

Even funnier is that researchers (people who are supposed to be really smart) either ignore or are blissfully unaware of this fact. When you take that into consideration, the pro-social argument falls on its face, and you're left with the reality that they do this to satiate their ego.


The RAND Corporation did contribute some ideas theoretically connected to nuclear survivability (packet switching in particular), but all that work was pre-ARPANET and doesn’t really motivate the design in that way.

It was designed to handle partial breaks and disconnections, though. Wikipedia quotes Charles Herzfeld, ARPA Director at the time, as below, and has much more discussion of why this belief is false: https://en.wikipedia.org/wiki/ARPANET

====

The ARPANET was not started to create a Command and Control System that would survive a nuclear attack, as many now claim. To build such a system was, clearly, a major military need, but it was not ARPA's mission to do this; in fact, we would have been severely criticized had we tried. Rather, the ARPANET came out of our frustration that there were only a limited number of large, powerful research computers in the country, and that many research investigators, who should have access to them, were geographically separated from them.[113]


So researchers are going to be irrational and will also often value other things more highly than prosociality, but that doesn't really refute my point that they value it more highly than the average person does.

Also, your example of a bad technology is something that allows people to still communicate in the event of nuclear war, and that seems good! Not all technology related to war is bad (like basic communication or medical technologies), and a huge amount of technology isn't for war anyway. We've all worked in tech here; "The development of technology is simply due to the reality of nations being in a constant arms race against one another" just isn't true. I've at the very least developed new technologies meant to make rich assholes into slightly richer assholes. Technology is complex and motivations for it are equally so; they won't fit into some trite saying.


I never claimed any technology is good or bad; you also seem to be in agreement with me that technology used in warfare _can_ have "good" applications (I mentioned that the benefits are secondary to their applications in war; that doesn't sound like me saying there are no benefits).

Lastly, the only point I was trying to make is that the argument that researchers do these things for "pro-social" causes is kind of a facade; the macro environment that incentivizes technological development *is* mostly due to government investment. Sure, the individuals working on it may all have different motivations, but they wouldn't be able to do so without large sums of money. The CIA [1] literally has a venture capital firm dedicated to investing in the development of technology - do you really believe they are doing that to help people?

- [1]: https://fortune.com/2025/07/29/in-q-tel-cia-venture-capital-...


This isn't unique to top AI researchers. Top talent has a long history of being averse to authoritarianism/despotism, at least in part because, almost by definition, it must suppress truth. You can't build the future effectively with that approach.


Because it is not Macrodata Refinement and you can’t stop them thinking off the clock.


Aside from the Maslow’s hierarchy of needs points others are making, I believe it has something to do with the history of AI research.

There is a big overlap between the “rationalist” and “effective altruist” crowds and some AI research ideas. At a minimum they come from the same philosophy: define an objective, and find methods to optimize that objective. For AI that’s minimizing loss functions with better and better models of the data. For EA, that’s allocating money in ways they think are expectation-maximizing.
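As a toy illustration of that "define an objective, then optimize it" framing (a made-up quadratic loss in Python, nothing specific to any real model):

    # Gradient descent on the made-up objective L(w) = (w - 3)^2,
    # whose minimizer is w = 3.
    def loss(w: float) -> float:
        return (w - 3.0) ** 2

    def grad(w: float) -> float:
        return 2.0 * (w - 3.0)

    w = 0.0
    for _ in range(100):
        w -= 0.1 * grad(w)  # step against the gradient

    print(w, loss(w))  # w ends up close to 3, loss close to 0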

Note this doesn’t apply to everyone. Some people just want to make money.


Maybe you’re reading “philosophical bent” as “armchair philosopher”, as in they are dabbling in a field unrelated to their profession and letting it drive their work; “worldview” might have made it clearer?


Indeed. Philosophically, I have not been impressed by the more vocal people associated with the field. They may not be representative - I think most do it for the money and it being hip.

“Worldview” is a better term, but people are generally blind to the worldview they’ve tacitly absorbed, including academics.


Because a lot of them are academics who are doctors of philosophy.


Because they can afford it, they are very sought after.

And smart people usually have moral convictions.

I know for some people on this website it's hard to understand, but not everything in life is about $$$


> And smart people usually have moral convictions.

Are you sure you don't just like the moral convictions and so engage in trait bundling?

Moral knowledge doesn't really exist. I mean you can have personal views on it, but the lack of falsifiability makes me suspect it wouldn't be well-correlated with intelligence.

Smarter people can discuss more layered or chic moral theories as they relate to theoretical AI, maybe.


> Moral knowledge doesn't really exist.

If that is the case, then why should you or anyone prefer to believe your claim that moral knowledge doesn’t exist over the contrary?


Different kinds of claims; it's not self-referential.


> Different kinds of claims

How so?

If I claim that one should prefer the claim "moral knowledge doesn't exist" over its contrary, then I am making a moral claim. That would make it self-refuting.

There is no fact-value dichotomy.

And one more thing...

> the lack of falsifiability

Is falsifiability falsifiable? If all credible claims must be falsifiable, then where does that leave us with the criterion of falsifiability itself (which is problematic even apart from this particular case, as anyone who has done any serious reading in the philosophy of science knows)?


> And smart people usually have moral convictions.

Dumb people have moral convictions. Smart people see the nuance.


I'm smart and you can buy my morals. So what?


Those people get paid so much anyway that they don't have to compromise their morals.

I guess that's not the case for you and me


so do oil and tobacco people, no?


So what, indeed (not sure what you mean)


True, many smart people will gladly (or even begrudgingly) do evil for money. That's why there is so much suffering in the world, because of people like you.


Is ad tech and the like really causing so much suffering? The government work, mass surveillance, killing people, etc., doesn't actually pay that much, typically.


I think ad tech is probably the single most destructive technology of the new millennium. The shift toward "engagement at all costs" business strategies is basically the root cause of society's current political polarization. Engagement bait cultivates fear and rage in the populace to get clicks. We are now seeing the consequences of shoving ads that sow fear, anger, doubt, and inadequacy into people's faces 24/7. This doesn't even touch on the fact that mass surveillance is only possible because of the technologies forged by the ad tech industry.


Well I'm not sure I entirely believe this myself, but it seems easy enough to argue that this is progress of a sort.

The West assumes pure democracy as the final form of government that we are all convergently evolving towards. But if this form of government or society is not robust to the kinds of things you're talking about, should it not suffer the consequences and be adapted or flushed for our long-term betterment?

It seems a bit like saying the French Revolution was the most destructive thing to happen in the history of France. Sure, in the short term. But it also paved the way for modern liberal democracy.


That’s fair enough. I wouldn’t say I’m happy about needing to live through interesting times, but if we make it out the other end maybe something better will come of it.


I tried some Julia plotting libraries a few years ago and they had APIs that were bad for interactively creating plots, as well as often being buggy. I don’t have performance problems with ggplot, so that’s what I tend to lean towards. Matplotlib being bad isn’t much of a problem anymore, as LLMs can translate from ggplot to matplotlib for you.
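For what it’s worth, the translation is fairly mechanical. A minimal sketch in Python (hypothetical data; the df columns and the ggplot one-liner in the comment are just for illustration) of a ggplot-style faceted scatter done in matplotlib:

    import pandas as pd
    import matplotlib.pyplot as plt

    # Hypothetical data; in ggplot this would be roughly
    # ggplot(df, aes(x, y)) + geom_point() + facet_wrap(~g)
    df = pd.DataFrame({"x": [1, 2, 3, 4], "y": [2, 1, 4, 3],
                       "g": ["a", "a", "b", "b"]})

    groups = list(df.groupby("g"))
    fig, axes = plt.subplots(1, len(groups), sharex=True, sharey=True)
    for ax, (name, group) in zip(axes, groups):
        ax.scatter(group["x"], group["y"])  # one facet per group
        ax.set_title(str(name))
    plt.show()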


I think being faster probably is important but it brings a bunch of challenges:

- the split pricing model makes it hard to tune model architecture for faster inference as you need to support fast and cheap versions.

- the faster the model is, the more of a problem it becomes that they don’t ‘understand’ time – they sit idle waiting for big compilations, or they issue tool calls sequentially when they ought to have issued them in parallel (see the sketch below)
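On that second point, a minimal sketch (run_tool is a made-up stand-in for a real tool call) of why sequential tool calls waste wall-clock time compared to issuing independent calls together:

    import asyncio

    async def run_tool(name: str, seconds: float) -> str:
        # Stand-in for a real tool call (shell command, web fetch, etc.)
        await asyncio.sleep(seconds)
        return f"{name} done"

    async def sequential() -> list[str]:
        # ~3s of wall-clock time: each call waits for the previous one
        results = []
        for name in ("lint", "test", "build"):
            results.append(await run_tool(name, 1.0))
        return results

    async def parallel() -> list[str]:
        # ~1s of wall-clock time: independent calls issued together
        return await asyncio.gather(
            *(run_tool(n, 1.0) for n in ("lint", "test", "build")))

    print(asyncio.run(parallel()))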


Is it really a problem with the industry, or is this the sort of thing where discussions go on forever on message boards (where no one is in charge and people aren’t trying to work together towards some actual goal), but where industry doesn’t suffer from the same problems?


I think it’s just very alien in that things which tend to be correlated in humans may not be so correlated in LLMs. So two things that we expect people to be similarly good at end up being very different in an AI.

It does also seem to me that there is a lot of variance in skill at prompting/using AI in general (I say this as someone who is not particularly good, as far as I’m aware – I’m not trying to keep tips secret from you). And there is also a lot of variance in the ability of an AI to solve problems of equal difficulty for a human.


The OP is about security and you specifically ignore security when bringing up a common flamewar topic for which much discussion has already been had on this site. Perhaps such discussion could at least be limited to articles where it is less tenuously related.


I guess I bring it up in the sense that no matter how good their security is, it still sucks that Apple products are so hostile to their owners. It's hard to be impressed by their security work with the platform being what it is.

Security, privacy, and ownership aren't cleanly separated in my mind.

