I may have developed some kind of paranoia reading HN recently, but the AI atmosphere is absolutely nuts to me. Have you ever thought that you would see a chart showing how the population of horses was decimated by the mass introduction of efficient engines, accompanied by an implication that there is a parallel to the human population? And the article is not written from any kind of cautionary, humanitarian angle, but rather from the perspective of some kind of economic determinism? Have you ever thought that you would be compared to a gasoline engine and everyone would discuss this juxtaposition from a purely economic perspective? And barely anyone shares a thought like "technology should be warranted by the populace, not the other way around"? And the guy writing this works at Anthropic? The very guy who makes this thing happen, but who can only conclude with "I very much hope we'll get the two decades that horses did". What the hell.
I have been completely shocked by the number of people in the tech industry who seem to genuinely place no value on humanity and so many of its outputs. I see it in the writing of leaders within VC firms and AI companies, but I also see it in ordinary conversations on the Caltrain or in coffee shops.
Friendship, love, sex, art, even faith and childrearing are opportunities for substitution with AI. Ask an AI to create a joke for you at a party. Ask an AI to write a heartfelt letter to somebody you respect. Have an AI make a digital likeness of your grandmother so you can spend time with her forever. Have an AI tell you what you should say to your child when they are sad.
If you want a data point from another angle: most people I know, both in Japan and Canada, use some sort of AI as a replacement for any kind of query. Almost nobody in my circles is in tech or tech-adjacent fields.
So yeah, it’s just everyone collectively devaluing human interaction.
Because the responses are often distilled down from the same garbage Google serves up, but presented as the opinion of Claude, whom she increasingly trusts.
I use Claude a lot. I have the most expensive Claude Max subscription both for my own consultancy and at client sites, separately. I'm increasingly close to an AI maximalist on many issues, so I'm not at all against extensive use of these models.
But it isn't quick to verify things of its own accord before giving answers, which makes it unsuitable as a general-purpose replacement for Google unless you specifically prompt it to search.
Google search results: a dozen sponsored links; a dozen links to videos (which I never use -- I'd rather read than watch); six or seven SEO-gamed pages; if you're lucky, what you actually want is far down near the end of the first page, or perhaps at the top of the second page; the other 700 pages of links are ... whatever. Repeat four or five times with variously tweaked queries, hoping that what you actually want will percolate up into the first or second page.
Claude: "Provide me links to <precise description of what you actually want". Result: 4 or 5 directly relevant links, most of which are useful, and it happens on the first query.
Claude is dramatically more efficient than Google Search.
> Claude: "Provide me links to <precise description of what you actually want". Result: 4 or 5 directly relevant links, most of which are useful, and it happens on the first query.
Which, as I pointed out, misses the point: you're advocating exactly the kind of explicit prompting I already said wouldn't be a problem. It's not how she uses it.
Ah, that's a good call-out. I don't use Claude aside from in Cursor; I use ChatGPT for normal queries and it's pretty good about doing searches when it doesn't think it knows the answer. Of course it'll search when prompted, but it'll often search without prompting too. I just mistakenly assumed that your fiancée's usage of Claude implied Claude was actually searching as well.
Google search sucks now because it's been targeted by the spammers and content farms. Before that happened it was pretty good. LLMs will eventually be poisoned the same way, whether by humans or other LLMs.
Garbage in, garbage out. Plus, chatbots will be monetized, which means they will show you things their ad partners want you to see rather than what you actually want.
Frankly, I've found even the free ChatGPT to be more useful when looking for something - I'd describe what I'm looking for, what the must-have features are, what I definitely don't mean, etc., and it'll suggest a few things. This has rarely led to not finding what I'm looking for. It's absolutely superior to Google search these days for things that have been around a while. I wouldn't check the news with it.
> If automation reaches the point where 99% of humans add no value to the "owners" then the "owners" will own nothing.
I don't think that's right. The owners will still own everything. If or when that happens, I think the economy would morph into a new thing completely focused on serving the whims of those "owners."
> If or when that happens, I think the economy would morph into a new thing completely focused on serving the whims of those "owners."
I think you might be a little behind on economic news, because that's already happening. And it's also rapidly reshaping business models and strategic thinking. The forces of capitalism are happily writing the lower and middle classes out of the narrative.
>> If or when that happens, I think the economy would morph into a new thing completely focused on serving the whims of those "owners."
> I think you might be a little behind on economic news, because that's already happening. And it's also rapidly reshaping business models and strategic thinking. The forces of capitalism are happily writing the lower and middle classes out of the narrative.
No, that doesn't surprise me at all. I'm basically just applying the logic of capitalism and automation to a new technology, and the same thing has played out a thousand times before. The only difference with AI is that, unlike previous, more limited automation, it's likely there will be no roles for displaced workers to move into (just like when engines got good enough there were no roles for horses to move into).
It's important to remember that capitalism isn't about providing for people. It's about providing for people with wealth to exchange. That works OK when you have full employment and wealth gets spread around by paying workers, but if most jobs disappear due to automation there's no mechanism to spread wealth to the vast majority of people, so under capitalism they'll eventually die of want.
What would they get from the plebs? Suppose we went through The Phools and the plebs were exterminated; then what? Perhaps we'd finally have Star Trek economics, but only for them, the "owners". Better be an "owner", then.
> Perhaps we'd finally have Star Trek economics, but only for them, the "owners". Better be an "owner", then.
I don't think we'll have Star Trek economics, because that would be fundamentally fair and egalitarian and plentiful. There will still be resource constraints like energy production and raw materials. I think it will be more like B2B economics, international trade, with a small number of relevant owners each controlling vast amounts of resources and productive capacity and occasionally trading basics amongst themselves. It could also end up like empires-at-war (which actually may be more likely, since war would give the owners something seemingly important to do, vs just building monuments to themselves and other types of jerking off).
Consider being a significant shareholder in the future as analogous to citizenship as it exists today. Non-owners will be personae non gratae, if they're allowed to live at all.
See also: Citigroup's plutonomy thesis[1] from 2006
tldr: the formal economy will shift to serving plutocrats instead of consumers; it's much more profitable to do so, and there are diminishing returns in serving the latter
Those nerds can now develop an AI robot to make love to their wives while they get back to blogging about accelerationism with all the time they freed up.
Making predictions on how it will turn out vs designing how it should be. Up till now, powerful people needed lots and lots of other humans to sustain their power and life. That dependency gave the masses leverage. Now, I'd like a society where everyone is valued for being human and stuff. With democracies we got quite far in that direction. Attempts to go even further... let's just say they "didn't work out". And right now, especially in the US, the societal system seems to be going back to "power" instead of rules.
Yeah, I see a bleak future ahead. Guess that's life, after all.
In the "learn to love democracy and freedom" sense, sure, but in the economic sense? "Didn't work out" feels like a talking point stuck in 1991. Time has passed, China is the #2 economy in the world, #1 if you pick a metric that emphasizes material or looks to the future. How did they get there? By paying the private owners of our economy to sell our manufacturing base to them piece by piece -- which the private owners were both entitled and incentivized to do by the fundamental principles of capitalism. The ending hasn't been written, but it smells like the lead-up to a reversal in fortune.
As for our internal balance of power, we've been here before, and the timeline conveniently lines up to almost exactly 100 years ago. I'm hoping for another Roosevelt. It wasn't easy then, it won't be easy now, but I do think it's fundamentally possible.
This is the direct result of abandoning religion altogether and becoming a 100% secular society.
I am currently reading the Great Books of the Western World in order to maybe somehow find god somewhere in there, at least in a way that can be woven into my atheist-grown brain, and even after just one year of reading and learning, I can feel the merits.
Accepting Science as our new and only religion was a grave mistake.
why exactly do you need a deity to tell you to love your fellow man? Do you need god in your life to want to love your children? I think this is not quite right. I don't think the desire in the Valley to create these tools, independent of outcome, is simply about greed; for companies like Anthropic, it's also about the ability to use AGI fear as a means to drive investment in themselves from a VC class that loves the idea of obliterating human labor. We need less money in tech - we'll probably get it soon enough.
> why exactly do you need a deity to tell you to love your fellow man?
Because that is not a given, as shown by the entirety of human history. Without God, the only arguments for love, or for what is right, are just what people think/feel/agree on at a certain time and place, which varies a lot and is definitely not universal.
> Do you need god in your life to want to love your children?
Most people don't need God to love their children, and the ones that don't love them might not be convinced otherwise by God.
That said, what do you do, exactly, for that love? Do you cheat and steal to guarantee your children's future over others'? If the only restraint is some "benefit to society" logical argument that would convince no one, why would one even care about that and not exploit society for their own benefit?
Almost everyone loves themselves and their family above all others. Only God can tell you to love your neighbors and even your enemies.
There are still many societies around the world where most people are mostly self-centered, and you can see the results. You are taking for granted many values you have, as if you arrived at them logically and independently instead of learning them from your parents and a society that derived them from God over centuries.
Are we completely ignoring the tonnes of awful things people have done in the name of their god? Belief in a higher power doesn't automatically make you good/bad. The same is true of the inverse.
>Without God, the only arguments for love, or for what is right, are just what people think/feel/agree on at a certain time and place, which varies a lot and is definitely not universal.
Let's ignore that laws exist for a second... Does god say everybody in Manhattan should reserve the left side of the escalators for people walking up them, and that the right should be left for people just standing and escalating? No, but somehow a majority of the population figured it out. Society still has rules, both spoken and unspoken, whether god is in the picture or not.
If you are serious about these questions, read Dominion by Tom Holland. He makes a very long and thorough historical case that Christianity has contributed more good than bad over the centuries. (I don’t know what comparable works are for other religions.)
Decoupled from the social systems built by organized religion, our “elites” are taking society to a horrific place.
Could you build up traditions and social structures over time without any deity that would withstand the hedonism and nihilism driving modern culture? Perhaps. But it would require time measured in generations we don’t have.
At the very least, it takes generations to build up shared traditions and values across a society. If you want an atheistic version of that, you would need to start now and it's going to take a long time to build.
> If you want an atheistic version of that, you would need to start now and it’s going to take a long time to build.
Why do you think we would have to start from zero? Even in highly religious countries not all traditions and values are tied to religion, and even those that are can be disconnected from their religious roots.
I would just say the success rate hasn't been very high so far.
The current evidence suggests that as people have become less religious, society has become more fragmented and individualistic, with less shared values and less sense of community or family.
Shared religious community has been replaced by quality time with screens, not meeting in person in some alternative atheistic community.
Writing off an entire facet of life as toxic, is toxic.
Anything taken to extreme can be harmful, but some of the most grounded and successful (as in, living well) people I know are those with a self-aware religious foundation to lean on. People may bring up examples of religious cults as a reason to discard all religion, but surely the same could be said for the many secular cults. We shouldn't throw out the baby with the bathwater, as they say.
We don't need religion for that; humanism exists as a way of living, for instance.
I don't think (most) people treat science "as a religion".
Some tech leaders seem to have swapped Ayn Rand (who, if you look at the early days, definitely acted like a cult leader) for this AI doomer cult, and as a result seem to be acting terribly.
Religion was much more widespread in the 1800s, but that didn't stop industrialists from acting terribly.
I can't say I'm shocked. Disappointed, maybe, but it's hardly surprising to see the sociopathic nature in the people fighting tooth and nail for the validation of venture capitalists who will not be happy until they own every single cent on earth.
There are good people everywhere, but being good and ethical stands in the way of making money, so most of the good people lose out in the end.
AI is the perfect technology for those who see people as complaining cogs in an economic machine. The current AI bubble is the first major advancement where these people go mask off; when people unapologetically started trying to replace basic art and culture with "efficient" machines, people started noticing.
I think, like the Bill Gates haters who interpret him talking about reducing the rate of birth in Africa as wanting to kill Africans, you're interpreting it wrong.
The graph says horse ownership per person. People probably stopped buying horses and let theirs retire (well, to be honest, probably also sent them to the glue factory), and when they stopped buying new horses, horse breeding programs slowed down.
Sending all the useless horses to glue factories in that time was so prevalent it was a cartoon trope. The other trope being men living in flop houses, and towns having entire sections for unemployable people called skid row.
The AI people point to post-1950s-style employment and say "people recovered after industrial advances" and ignore the 1880s through the 1940s. We actually have zero idea if the buggy whip manufacturer ever recovered, or just lasted a year on skid row before giving up completely, or lived through the two world wars spurred by mechanisation.
Horses were killed more often for meat that was used in dog food than for glue.
I did some deep research into the decline of horses, and it was consistent with fewer births, not mass slaughter. The US Department of Agriculture has great records from this time, though they're not fully digitized.
In this analogy, horses are jobs, not humans; you could argue there's not much of a difference between the two, because people without jobs will starve, etc., but still, they're not the same.
Why make the analogy at all, if not for the implied slaughter? It is a visceral reminder of our own brutal history. Of what humans do given the right set of circumstances.
There is, at least, a way to avoid people without jobs starving. Whether or not we'll do it is anyone's guess. I think I'll live to see UBI, but I am perhaps an optimist.
You'd have to time something like UBI with us actually being able to replace the workforce -- the current LLM parlor tricks are simply not what they're sold to be, and if we rely on them too early, we (humanity) are very much screwed.
One would argue in a capitalist society like ours, fucking with someone's job at industrial scale isn't awfully dissimilar from threatening their life, it's just less direct. Plenty more people currently are feeling the effects of worsening job markets than have been involved in a hostage situation, but the negative end results are still the same.
One would argue also if you don't see this, it's because you'd prefer not to.
If we had at least a somewhat functioning safety net, or UBI, or both, you'd at least have an argument to be made, but we don't. The business model of AI and its associated companies is, if not killing people, certainly attempting to make lots of lives worse at scale. I wouldn't work for one for all the money in the world.
UBI will not save you from economic irrelevance. The only difference between you and someone starving in a 3rd world slum is economic opportunity and the means to exchange what you have for what someone else needs. UBI is inflation in a wig and dark glasses.
population projections already predict that prosperity reduces population
and even if AI becomes good enough to replace most humans the economic surplus does not disappear
it's a coordination problem
in many places on Earth social safety nets are pretty robust, and if AI helps to reduce cost of providing basic services then it won't be a problem to expand those safety nets
...
there's already a pretty serious anti-inequality (or at least anti-billionaire) storm brewing; the question is whether it can motivate the necessary structural changes or just fuel yet another dumb populist movement
I think the concerns with UBI are (1) it takes away the leverage of a labor force to organize and strike for better benefits or economic conditions, and (2) following the block grant model, can be a trojan horse "benefit" that sets the stage for effectively deleting systems of welfare support that have been historically resilient due to institutional support and being strongly identified with specific constituencies. When the benefit is abstracted away from a constituency it's easier to chop over time.
I don't exactly know how I feel about those, but I respect those criticisms. I think the grand synthesis is that UBI exists on top of existing safety nets.
In practice there are a few strong local unions (NY teachers, the ILA (eastern longshoremen)), but in general it doesn't help those who are not employed. (Also, when was the last general strike that achieved something ... other than getting general strikes outlawed?)
... also, one pretty practical problem with UBI is that cost of living varies wildly. And if it depends on location then people would register in a high-CoL place and live in a low-CoL place. (Which is what remote work already should be doing, but many companies are resistant to change.)
In theory it makes sense to have easy to administer targeted interventions, because then there's a lot of data (and "touch points" - ie. interaction with the people who actually get some benefit), so it's possible to do proper cost-benefit analyses.
Of course this doesn't work, because allocation is over-politicized; people want all kinds of means-testing and other hoops for people to jump through. (Like the classic "prove you still have a disability" hoop, where people with Type I diabetes have to get a fucking paper every few years.)
So when it comes to any kind of safety net it should be as automatic as possible, but at least as targeted as negative income tax. UBI might fit depending on one's definition.
... maybe? It depends on how it's implemented. (And that depends on the legislative purpose.) The usual equality vs equity thing comes to mind. (Negative income tax has probably the most desirable properties for this, as far as I know.)
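To make that concrete, here's a minimal sketch of how a negative income tax computes a benefit; the floor and phase-out rate are made-up illustrative parameters, not real policy numbers:

```python
def negative_income_tax(income: float, floor: float = 15_000.0,
                        phaseout: float = 0.5) -> float:
    """Benefit under a simple negative income tax.

    Everyone is guaranteed `floor`; the benefit shrinks by `phaseout`
    dollars per dollar earned and hits zero at floor / phaseout.
    Parameters here are illustrative assumptions only.
    """
    return max(0.0, floor - phaseout * income)

# Earning nothing yields the full 15k floor; earning 20k yields a
# 5k top-up; at 30k and above the benefit is zero.
for earned in (0, 10_000, 20_000, 30_000, 40_000):
    print(earned, negative_income_tax(earned))
```

The point being that the targeting is automatic: the benefit tapers with income by construction, with no separate means test to administer.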
Somebody should try a smart populist movement instead. My least favorite thing about my favored (or rather least disfavored) party is that we seem to believe “we must win without appealing to the populace too directly, that would simply be uncouth.”
One could argue that the quality of life per horse went up, even if the total number of horses went down. Lots more horses now get raised in farms and are trained to participate in events like dressage and other equestrian sports.
Someone said, during the "self-driving cars are the future!" hype, that ICE/driver-driven cars will go the way of the horse: they'll be well cared for, kept in stables, and taken out on weekends for recreation, on circuits but not on public roads.
When the Covid-truther geniuses "figured out" that "Bill Gates was behind Covid", they pulled out things like this as "proof" that his master plan is to reduce the world's population. Not to reduce the rate of increase, but to kill them (because of course these geniuses don't understand derivatives)...
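For anyone who wants the derivative point spelled out: reducing the rate of increase and reducing the population are different claims about the same curve $P(t)$,

$$\frac{dP}{dt} > 0 \;\text{with}\; \frac{d^2P}{dt^2} < 0 \quad \text{(growth slowing, population still rising)} \qquad \text{vs.} \qquad \frac{dP}{dt} < 0 \quad \text{(population actually falling).}$$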
Ah, got it. This sounds like more of a "repugnant conclusion" sort of problem where if you care about the well being of people who exist, then it is possible to have too large of a population.
We don't know what the author had in mind, but one has to really be tone deaf to let the weirdness of the discussion go unnoticed. Take a look at the last paragraphs in the text again:
> And not very long after, 93 per cent of those horses had disappeared.
> I very much hope we'll get the two decades that horses did.
> But looking at how fast Claude is automating my job, I think we're getting a lot less.
While most of the text is written from a cold, economic(ish) standpoint, it is really hard not to get a bleak impression from it. And the last three sentences express that, in a vague way, too. Some ambiguity is left on purpose, so you can interpret the daunting impression your own way.
The article presents you with a crushing juxtaposition, implies insane dangers, and leaves you with a feeling of inevitability. Then back to work, I guess.
> And not very long after, 93 per cent of those horses had disappeared.
> I very much hope we'll get the two decades that horses did.
Horses typically live between 25 and 30 years. I agree with OP that most likely those horses were not decimated (killed) but just died out, and people stopped mass-breeding them. Also, as others noticed, the chart shows "horses PER person in the US". The population between 1900 and 1950 increased from 1.5B to 2.5B (globally, but probably a similar, almost 70%, increase in the US).
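A rough back-of-the-envelope, taking the article's 93% per-capita drop and that ~70% population growth at face value (where $H$ is total horses, $h$ is horses per person, and $P$ is population):

$$\frac{H_{1950}}{H_{1900}} = \frac{h_{1950}}{h_{1900}} \cdot \frac{P_{1950}}{P_{1900}} \approx 0.07 \times 1.7 \approx 0.12,$$

so the absolute horse population would have fallen roughly 88%, somewhat less stark than the per-capita chart suggests, but not by much.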
I think it depends on what you worry about:
1) `That the human population decreases 50-80%`?
I don't worry about that even if it happens. 200 years ago the human population was ~1B; today it is ~8B. In year 0 AD the human population was ~0.25B. Did people 200 years ago worry about it, like "omg, the human population is only 1B"?
I doubt the human population will decrease 80% because of no demand for humans as a workforce, but I don't see a problem if it decreases by 50%. There will be a short transition period with a surplus of retired people and work needed to keep up the infrastructure, but if robots can help with this, then I don't see the problem.
2) `That we will not be needed and we will lose jobs?`
I don't see work as something in demand for its own sake. Most people hate their jobs or do crappy jobs. What people actually worry about is that they won't get any income. And actually not even that: they worry that they will not be able to survive, or will be homeless. If there are improvements in production such that food, shelter, transportation, and healthcare are dirt cheap (all the stuff from the bottom of Maslow's pyramid), plus fair distribution at the social level, then I also see a way this can be no problem.
3) `That we will all die because of AI`
This I find more plausible, and maybe not even from AGI, but earlier, because of big social unrest during the transition period.
As someone who raises horses and other animals, I can say with pretty high certainty that most of the horses were not allowed to "retire". Horses are expensive and time-consuming to care for, and with no practical use, most horses would have been sent not to the glue factory but (at that time) to the butcher and their non-meat parts used for fertilizer.
Yeah, I agree with what you said. It's not about the absolute number of people, but the social unrest. If you look at how poorly we have done at redistributing wealth so far, I find it hard to believe that we will do well in the future. I am afraid of mass pauperisation and immiseration of societies, followed by violence.
What's more important - "redistribution of wealth" or simply reducing the percentage of people living in abject poverty? And wouldn't you agree that by that measure, most of the world, including its largest countries, has done quite a good job?
From 1990 to 2014, the world made remarkable progress in reducing extreme poverty, with over one billion people moving out of that condition. The global poverty rate decreased by an average of 1.1 percentage points each year, from 37.8 percent to 11.2 percent in 2014.
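As a sanity check, the quoted average rate is just the endpoints divided by the span:

$$\frac{37.8\% - 11.2\%}{2014 - 1990} = \frac{26.6\ \text{percentage points}}{24\ \text{years}} \approx 1.1\ \text{pp/year}.$$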
I think the phrase "fair distribution on social level" is doing a lot of work in this comment. Do you consider this to be a common occurrence, or something our existing social structures do competently?
I see quite the opposite, and have very little hope that reduced reliance on labor will make the distribution of wealth more equitable.
Doesn't matter. The countries with the most chaos and internal strife get a lot of practice fighting wars (civil wars). Then the winner of the civil war, who is used to grabbing resources by force and has perfected war skills through survival of the fittest, goes around looking for other countries to invade.
Historically, advanced civilizations with better production capabilities don't necessarily do better in war if they lack "practice". Sad but true. Maybe not in 21st century, but who knows.
Yeah none of that fever dream is real. There's no "after" a civil war, conflicts persist for decades (Iraq, Afghanistan, Syria, Myanmar, Colombia, Sudan).
Check this out - https://data.worldhappiness.report/chart. The US is increasingly a miserable place to live in, and the worse it gets, the more its people double down on being shitty.
Fun fact: fit 2 lines on that data and you can extrapolate that by ~2030 China will be a better place to live. That's really not that far off. Set a reminder on your phone: Chinese dream.
Well, in this case corporations stop buying people and just fire them instead of letting them retire. Or an army of Tesla Optimi will send people to the glue factory.
That, at least, is the fantasy of these people. Fortunately, LLMs don't really work, Tesla cars are still built by KUKA robots (while KUKA has a fraction of Tesla's P/E), and data centers in space are a cocaine-fueled dream.
> And the article is not written in any kind of cautionary humanitarian approach, but rather from perspective of some kind of economic determinism? Have you ever thought that you would be compared to a gasoline engine and everyone would discuss this juxtaposition from purely economic perspective?
One of the many terrible things about software engineers is their tendency to think and speak as if they were some kind of aloof galaxy-brain, passively observing humanity from afar. I think that's at least partially the result of 1) identifying as an "intelligent person" and 2) computers and the internet allowing them, in large part, to become disconnected from the rest of humanity. I think they see that aloofness as a "more intelligent" way to engage with the world, so they do it to act out their "intelligence."
I always thought intentionally applying an emotional distance was a strategy to help us see what's really happening, since allowing emotions to creep in causes us to reach conclusions we want (motivated reasoning) instead of conclusions that reflect reality. I find it a valuable way to think. Then there's always the fact that the people who control the world have no emotional attachment to you either. They see you as something closer to a horse than their kin. I imagine a healthy dose of self-dehumanization actually helps us understand the current trajectory of our future. And people tend to vastly overvalue our "humanity" anyway. I'm guessing the ones that displaced horses didn't give much of a fuck about what happened to horses.
I wish I knew what you were so I could say "one of the many terrible things about __" about you. Anyway, I think you have an unhealthy emotional attachment to your emotions.
> I wish I knew what you were so I could say "one of the many terrible things about __" about you.
I'm a software engineer, so I beat you to it.
> I always thought intentionally applying an emotional distance was a strategy to help us see what's really happening, since allowing emotions to creep in causes us to reach conclusions we want (motivated reasoning) instead of conclusions that reflect reality. I find it a valuable way to think.
And the problem is taking that too far, and doing it too much. It's a tactic "to help us see what's really happening," but it's wrong to stop there and forget things like values, interests, and morality.
> And people tend to vastly overvalue our "humanity" anyway.
WTF, man.
> I'm guessing the ones that displaced horses didn't give much of a fuck about what happened to horses.
Who cares what "the ones that displaced horses" thought? You're the horse in that scenario,and the horse cares. Another obnoxious software engineer problem is taking the wrong, often self-negating, perspective.
Yes, the robber who killed you to steal your stuff probably didn't mind you died. So I guess everything's good, then? No.
> Anyway, I think you have an unhealthy emotional attachment to your emotions.
Emotions aren't bad, they're healthy. But a rejection of them is probably a core screwed-up belief that leads to "aloof galaxy-brain, passively observing humanity from afar" syndrome.
There's probably a parallel to the kind of obliviousness that gets you the behavior in the Torment Nexus meme (Tech Company: "At long last, we have created the Torment Nexus from the classic sci-fi novel Don't Create The Torment Nexus."), i.e. Software Engineer: "At long last, I've purged myself of emotion and become perfectly logical, like Lt. Cmdr. Data from the classic sci-fi Logical Robot Data Wants to Be Human and Feel Emotions."
This strikes me as more in the tone of Orwell, who used a muted emotional register to elicit a powerful emotional response from the reader as they realize the horror of what's happening.
> Have you ever thought that you would see a chart showing [...]
Yes, actually, because this has been a deep vein of writing for the past 100 or more years. There's The Phools, by Stanislaw Lem. There are the novels written by Boris Johnson's father that are all about depopulation. There's Aldous Huxley's Brave New World. How about Logan's Run? There has been so much writing about the automation/technology apocalypse for humans in the past 100 years that it's hard to catalog it -- much of what I have read or seen go by in this vein I've totally forgotten.
It's not remotely a surprise to see this amp up with AI.
Yeah, I am familiar with these works of art and probably most people are. However, they were mostly speculative. Now we are facing some of their premises in the real world. And the guys who push the technology in a reckless way seem to notice this, but just nod their heads and carry on.
At long last, we have created the Torment Nexus from classic sci-fi novel Don't Create The Torment Nexus.
Works of art, works of predictive programming, life imitating art -- what's the difference, if in the end the artistic predictions come true?
People have been thinking apocalyptic thoughts like these since.. at least Malthus's An Essay on the Principle of Population (1798). That's 227 years if you're keeping score. Probably longer; Malthus might only have been the first to write them down and publish them.
> Have you ever thought that you would see a chart showing how population of horses was decimated by the mass introduction of efficient engines accompanied by an implication that there is a parallel to human population?
Yes, here's a youtube classic that put forth the same argument over a decade ago, originally titled "Humans need not apply": https://youtu.be/7Pq-S557XQU
Oh, _now_ computer industry people are worried? Kind of late to the party.
Computerization, automation and robotics, document digitization, the telecoms and wireless revolution, etc. have been upending peoples' employment on a massive scale since before the 1970s. The reaction of the technologists has been a rather insensitive "adapt or die", "go and retrain", and analogies to buggy whip manufacturers when the automobile became popular. The only reason people here suddenly give a hoot is because they think the crosshairs are drifting towards them.
It reminds me of "You maniacs! You blew it up! Goddamn you all to hell!" from the original Planet of the Apes (1968), https://youtu.be/mDLS12_a-fk?t=71
It's been a decade or so, but I'm mostly called a "resource" at work, as in Human Resource. Rarely colleague, comrade, or co-worker... just a resource, a plug in the machine that needs to be replaced by an external resource to improve profit margins.
You can kind of separate the technical side of what will likely happen - AI gets smarter and can do the jobs - from how we deal with that. It could be heaven-like, with abundance and no one needing to work, or a post-apocalyptic dystopia, or, likely, somewhere in the middle.
We collectively have a lot of choice on the "how we deal with it" part. I'm personally optimistic that people will vote in people-friendly policies when it comes to it.
I'm not seeing any horse heavens. Do you have reason to believe humans (i.e. those not among the ruling class) are going to have a different fate from the horses?
I agree we can kinda make the argument that abundance is soon upon us, and that humanity as a whole will embrace the ideas of equality and harmony, etc... but still, there's a kinda uncanny dissociation in happily talking about horses disappearing and humans being next while you work on the product that directly causes your prediction to come true, and to happen earlier...
It isn't just AI. So much of the US "Tech"/VC scene is doing outright evil stuff, with seemingly zero regard for any consequence or even a shred of self awareness.
So much money is spent on developing gambling, social media, crypto (fraud and crime enabler) and surveillance software. All of these are making people's lives worse, these companies aren't even shy about it. They want to track you, they want you to spend as much time as possible on their products, they want to make you addicted to gambling.
Just by how large these segments are, many of the people developing that software must be posting here, but I have never seen any actual reflection on it.
Sure, I guess developing software making people addicted to gambling pays the bills (and more than that), but I haven't seen even that. These industries just exist and people seem to work for them as if it was just a normal job, with zero moral implications.
In the US at least, there is a Congress incapable of taking action and a unilateral President fully on the side of tech CEOs with the heaviest investments in AI.
There is no evidence supporting short-term optimism. Every indication is that the large corporations dictating public policy will treat us exactly like those horses when it comes to economic value.
My experience so far has been that the knowledge of what should and shouldn't be, while important, bears no predictive power whatsoever as to what actually ends up happening.
In this instance, in particular, I wouldn't expect our preferences to bear any relevance.
> knowledge of what should and shouldn't be, while important, bears no predictive power whatsoever as to what actually ends up happening.
I don’t know if you are intentionally being vague and existential here. However, context matters, and the predictive power is zero sounds unreasonable in the face of history.
Think of humans learning that diseases were affecting us, which led to solutions like antibiotics and vaccines. It was not guaranteed, but I'm skeptical of the predictive power being zero.
I took the article as meaning white collar tech jobs that will go away, so those people will need to pivot their career, not humans.
However, it does seem like time for humanity to collectively think hard about our values and goals, and what type of world and lives we want to have in an age where human thought, and perhaps even human physical labor are economically worthless. Unfortunately this could not come at a worse time with humanity seemingly experiencing a widespread rejection of ideals like ethics, human rights, and integrity and embracing fascism and ruthless blind financial self interest as if they were high minded ideals.
Ironically, I think tech people could learn a lot here from groups like the Amish- they have clearly decided what their values and goals are, and ruthlessly make tech serve them, instead of the other way around. Despite stereotypes, Amish are often actually heavy users of, and competent with modern tech in service of making a living, but in a way that enforces firm boundaries about not letting the tech usurp their values and chosen way of life.
It was always like this. Look at history, sometimes quite recent: people were always treated like a tool - for getting rich, for getting into power, for conquering other countries, for serving those in power.
It's interesting though how the narrative is all bright-eyed idealism, make the world a better place, progress, etc until at some point the masks go off and suddenly it's "always has been, move along, nothing to see here"...
For the Romans, winning wars was the main source of elite prestige. So the Empire had to expand to accommodate winning more wars.
Today, the stock market and material wealth dominate. If elite dominance of the means of production requires the immiseration of most of the public, that's what we'll get.
> Have you ever thought that you would be compared to a gasoline engine and everyone would discuss this juxtaposition from purely economic perspective?
Not sure if by accident or not, but that's what we are according to today's "tech elite".
Therefore, the most profitable disposition for this dubious form of capital is to convert them into biodiesel, which can help power the Muni buses
I think we have a bunch of people in the United States who see what we elected for leadership, and the people he chose to advise him, and they have given up all hope. That despondent attitude is infusing their opinions on everything. But chin up, he's really old, and he doesn't seem very healthy or he'd be out there leading the charge, throwing those rallies every weekend of which he used to be so fond.
And low information business leaders will attempt to do all the awful things described here and the free market will eliminate them from the game grid one horrible boss at a time. But if you surround yourself with the AI doomers and bubblers, how will you ever encounter or even consider positive uses of the technology? What an awful place to work Anthropic must be if they truly believe they are working on the metaphorical equivalent of the Alpha Omega bomb. Spoilers: they're not.
Meanwhile, in the rest of the world, many look forward to harnessing AI to ameliorate hunger, take care of the elderly, and perform the more dangerous and tedious jobs out there. Anthropic guy needs to go get a room with Eliezer Yudkowsky. I guess the US is about to get horsed by the other 96% of the planet.
Go ahead, compare me to a horse, a gasoline engine, or even call me a meatbag. Have we become little more than Eloi snowflakes to be so offended by that?
But I guess as long as an electoral majority here continues to cheer on one man draining the juice of this country down to a bitter husk, the fun and games will continue.
> But chin up, he's really old, and he doesn't seem very healthy or he'd be out there leading the charge throwing those rallies every weekend of which he used to be so fond.
At this point in time, his whimsy is the only thing holding back younger, more extreme acolytes from doing what they want. Once he's gone, lol.
Machines to “take care of the elderly” is one of the worst possible uses of this technology. We desperately need more human interaction between the old and the young, not less.
Yes. Follow in the path of the tech leaders. They are optimists. They totally aren't building doomsday bunkers or trying to build their data centers with their own nuclear power plants to remove them from society and create self contained systems. Oh wait. Crap...
American tech leaders are just as bad, leading the charge straight into the abyss. But if you close your mind to the rest of the world, I can see why you'd see a 0-or-1 choice here. That's all the corporate media and influencers write these days, all the way from Paul Krugman to Cory Doctorow. And let's not even get started on the Three Men and an ASIC house of AI circle jerkers.
I mean if you're the sort that thinks Greta Thunberg and Eliezer Yudkowsky are agents of the Antichrist, it's long overdue to touch grass. And I don't think he believes that, but I think he thought people were stupid enough to buy it so he ran with it. Can't blame him for trying!
But given the right's hatred of renewables and the left thinks nuclear power plants can explode like atomic bombs, I'd be pushing for gas and nuclear to power my data centers too.
TLDR: you're being fed a false narrative that this is a 0-or-1 choice, but I guess it will take the rest of the world to demonstrate that, not the US.
Money isn't the only thing a job provides. Those are all professions that provide a sense of meaning, so monetary compensation doesn't need to be as high to attract and keep people.
Yeah that’s part of the gaslighting. It takes 2 seconds to realize it’s wrong, but people parrot it like Fox News talking points.
Every profession attracts people who enjoy it, eg lawyers tend to enjoy adversarial debate. Lots of countries don’t treat their teachers like shit. It’s a choice.
Yeah I'm sure that felt good to say but it's fucking bullshit. I'm not a software dev because it's my calling, I'm a software dev because I'm pretty sure I wouldn't make any money doing anything else. I only briefly considered teaching as a profession (knowing I'd be functionally poor as a result) despite being quite sure I'd enjoy it (I tutored and instructed all through high school and college) because the desire to make money combined with the current state of schools ultimately won out. Other people who are more passionate about teaching and/or don't have the same skills as me would have gone the other way.
There will always be fucking teachers, pilots, etc, because people WANT to be those things, and it's gonna take more than "nuh uh, Fox News" to dislodge that belief.
That said, I didn't say it isn't a public policy choice, I'm saying the "passion factor" is the reason they are ABLE to offer these jobs at low wages.
There's an affordability crisis hitting the whole country and getting worse (inflation). It's like a guy who has been stabbed saying "I am ABLE to walk". Not for much longer!
Check this out - https://data.worldhappiness.report/chart. The data shows the whole US is increasingly a miserable place to live in. And -- here's the best part -- the worse it gets, the more Americans double down on being shitty, selfish, evil fucks. Look at you, trying very hard not to understand "teachers should be able to afford housing".
Of course it backfires. Crappy educational system => Americans don't understand tariffs => They vote for a guy who trashes the whole economy. That's called karma.
Fun fact: fit 2 lines on that data and you can extrapolate that by ~2030 China will be a better place to live. That's really not that far off. Set a reminder on your phone.
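For what it's worth, the "fit 2 lines" exercise is mechanically just two least-squares fits and an intersection; here's a sketch with numpy, where the scores are made-up placeholders rather than the actual World Happiness Report data:

```python
import numpy as np

# Hypothetical life-evaluation scores (placeholders, NOT the real data):
years = np.array([2015, 2017, 2019, 2021, 2023])
us    = np.array([7.1, 7.0, 6.9, 6.8, 6.7])   # drifting down
china = np.array([5.1, 5.3, 5.5, 5.7, 5.9])   # drifting up

# Least-squares line (slope, intercept) for each series.
m_us, b_us = np.polyfit(years, us, 1)
m_cn, b_cn = np.polyfit(years, china, 1)

# The lines cross where m_us*x + b_us == m_cn*x + b_cn.
crossover = (b_cn - b_us) / (m_us - m_cn)
print(f"extrapolated crossover year: ~{crossover:.0f}")
```

With the real chart data the crossover lands wherever the actual slopes put it; the placeholder numbers above happen to cross around 2028.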
I would say that it is bad when it has a large derivative (positive or negative). However, the problem is not about the >number of human beings< but about making the agency that existing people have obsolete.
We're making slave morality obsolete. The sense of self dependent entirely on external performance. Reality is making it so that we will have very little value in that domain very soon. Because everything we do now will be done better faster cheaper by the machines.
That's going to be a pretty rough transition. I think the economic aspects will be pretty straightforward by comparison to the psychological upheaval!
It's bad if it goes down by more than about 1.2% per year. That would mean zero births, present-day natural deaths. Of course zero births isn't presently realistic, and we should expect the next 10-30 years to significantly increase human lifespan. If we assume continued births at the lowest rates seen anywhere on the planet, and humans just maxing out the present human lifespan limit, then anything more than about a 0.5% decrease means someone is getting murked.
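Spelling out the arithmetic as I read it (assuming a stable age distribution, a lifespan around 80-85 years, and a birth rate $b$): the natural decline rate is roughly deaths minus births,

$$\text{decline} \approx \frac{1}{\text{lifespan}} - b \approx \frac{1}{83} - 0 \approx 1.2\%/\text{yr}$$

with zero births; keep some residual births at the lowest observed rates and stretch the assumed lifespan, and the floor drops toward the ~0.5%/yr figure. Anything steeper than that floor implies deaths beyond natural attrition.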
That depends on what you think jobs and the economy are for, generally.
If you think the purpose of the economy is for the economy to be good then it doesn't matter. If you think it exists to serve humanity then... You really wouldn't need to ask the question, I imagine.
The number of humans doesn't exactly serve humanity though either. Both of those variables are mostly irrelevant to actual human happiness and flourishing. In fact as far as I can tell they are actively harmful, for several different reasons.
it's a con job and strawman take. if we collectively think token generators can replace humans completely, well then we've already lost the plot as a global society
> I may have developed some kind of paranoia reading HN recently
My comments that got downvoted, which is pretty rare lately, made legitimate but never-discussed points about AI that I had validated IRL. The way AI is discussed on HN doesn't resonate at all with IRL, to the point that I can't rule out more or less subtle manipulation of the discussions.
I don't think that bots have taken over HN. I meant that the frontier of the tech research brags about their recklessness here and the rest of us have become bystanders to this process. Gives me goosebumps.
I wouldn't read too much into it. Anytime I post something silly and stupid, it becomes the top comment. Anytime I post something important, I get downvotes. That's just normal. I think that's just human nature...
And the votes are pretty random too. Sometimes it'll go from -5 to +10 in the span of a few hours. Just depends on who's online at the time...
And yet don't they pull on our heartstrings? Isn't that funny? A random number generator for the soul...
Honestly I can't tell if your incredulity is at the method of analysis for being tragically mistaken or superficial in some way, at the seemingly dehumanizing comparison of beloved human demonstrations of skill (chess, writing) to lowest common denominator labor, or the tone of passive indifference to computers taking over everything.
I think the comparisons are useful enough as metaphors, though I wonder about the analysis, because it sounds like someone took a Yudkowsky idea and talked about it like a human, which might make a bad assumption go down more smoothly than it should. But I don't know.
I'd like to note here that the lifespan of a horse is 25-30 years. They were phased out not with mass horse genocide, but likely in the same way we phase out Toyota Corollas that have gotten too old. Owners simply didn't buy a new horse when the old one wore out, but bought an automobile instead.
Economically it is no different from demand for Mitsubishis decreasing, except the vehicle in this case eats grass, poops, and feels pain.
If you want to analogize with humans, a gradual reduction in breeding (which is happening anyways with or without AI) is probably a stronger analogy than a Skynet extinction scenario.
Truth is this is no different than the societal trends that were introduced with industrialization, simply accelerated on a massive scale.
The threshold for acquiring wealth through education is bumping up against our natural human breeding timeline, delaying childbirth past optimal natural fertility ages in the developed world. The amount of education needed to achieve certain types of wealth will stretch into decades, causing even more strain on fertility metrics. Some people will decide to have more kids and live purely off whatever limited welfare the oligarchs in charge decide is acceptable. Others will delay having children far past natural human fertility timespans, or forgo having children at all.
If we look at it this way, a reduction in human population would be contingent on whether you think human beings exist and are bred for the purposes of labor.
I believe most people would agree with me that the answer is NO.
The analogy to horses here then is not individuals, but specific types of jobs.
Honestly, the answer for me is yes. I had expected it. The signs were in all the comments that take the market forces for granted. All the comments that take capitalism as a given and immutable law of nature. They were in all the tech bros that never ever wanted to change anything but the number of zeros in their bank account after a successful exit.
So yes, I had that thought you are finally having too.