People can correct me if I'm wrong, but I think the core logic behind OpenAI's valuation was essentially that AI would work like search. Google had the best search engine, it became a centre of gravity that sucked everything in, and suddenly network effects meant it was the centre of the universe. There seem to be two big problems with that though. The first is that for search, queries are both demand for the product and a way of making the product better. The second is that Google was genuinely the best product for a very long time.
Maybe point (1) was unclear at some point, but I think it's mostly clear today that it's not happening: training the model is largely distinct from inference, so user queries don't automatically make the product better.
Point (2) is really funny - because sure, at some point OpenAI was the best, and then Sam Altman blew the place up and spawned a whole host of competitors who could replicate and eventually surpass OpenAI's state of the art.
It now looks like AI is a death march. You must spend billions of dollars to have the best model or you won't be able to sell inference. But even if you do, a whole host of better-funded competitors are going to beat you within months, so your inference charges had better pay off extremely quickly. And when the gap between models starts to shrink, distribution becomes king, and OpenAI can't compete in that field either.
Google can do that. Meta can do that. MSFT probably can do that. Amazon can do that. OpenAI cannot. They do not have the cash to do it.
I think a large part of its valuation was its ability to compete with search, but that's understating it a bit. Unlike search, it could/can be the platform users primarily interact with (à la a social media replacement) while having huge impacts on enterprise work and automation. I think it's the combination of one company effectively being able to compete on every front in the modern web ecosystem that's contributed to the valuation.
It's also important to note the valuation is not just based on its possible concrete economic impact in these areas but also on future "unknown" possibilities (i.e. whatever "AGI" means to investors). That's not to say I believe it's possible to achieve this, but rather that a huge part of Sam Altman's job is increasing the valuation through unfounded claims about AGI's possibility and potential impact.
Yeah, to zoom out, I think it was less specifically search and more generally: there was the PC, and the winner became a behemoth. Then there was search, and the winner became a behemoth. Then smartphones, and the winner became a behemoth. Then there was social media, and the winner became a behemoth.
The logic was basically "AI is going to be the next thing. The winner is going to be massive, so let's back the person who looks best placed to win." To be fair, it's probably correct. The people betting on OpenAI probably have plenty of money in Google shares and almost certainly have a share of Anthropic, Grok, you name it. Most of them will go to zero, but the one winner could pay off. I'm not sure even one will pay off.
I'd almost forgotten about AGI; that was supposed to be the reason for the valuations and all the hope/fear. Then it just sort of went away and AI turned into the Software Developer doomsday machine. We're on month 4 since the models got really good at code and we were all going to be out of a job in 6 months. I guess we only have 2 more months of employment left /s
"Google had the best search engine, it became a centre of gravity..."
Almost no one made serious attempts at competing with Google. And not because of network effects or any other hard blocker.
In the early 2000s, the industry just wasn't mature enough to heavily fund serious competition.
By the 2020s, the industry had the funding and the founders ready to jump on any huge opportunity that presented itself.
There are of course downsides, but this competitive landscape in AI seems like a huge net win for users in terms of lower costs and faster progress.
That's been my feeling for a while now. Google just has to keep up while OpenAI and Anthropic go bankrupt. I can see MSFT and Amazon eventually consuming OpenAI and Anthropic respectively when the money runs out, but I still think Google is the eventual winner. I've also been pointing out that Apple making a deal with Google, versus trying to do it on their own, is another vote in that direction.
I'm just sad Google was intent on ruining their own product, whether by removing the + operator (seriously - Google+ is not an excuse, I don't care if it conflicts with search, don't do that) or through some of their political censorship.
And as we all know, if you're smart enough to get root access, your neighbours' children playing football in the street should be subject to the risk of you driving a car that claims to have full self-driving with custom code on it.
This is just Apple saying "We own all user compute now". Yeah, you guys can fight over data centres, but every device that a user physically has will be an Apple device. They've now got the full range of price points from low-cost to prosumer, and they've got the software stack to back it up, so you can have your sales staff running neos logging in to their CRM and engineers running their MacBook Pros.
It's kind of insane, the advantage Apple Silicon has brought, along with the brutal price competition in PC sales. The only question I have is whether this touches the sides. That is to say: they sell a billion iPhones, but are consumer laptop and low-end business sales enough to bump the numbers? They're thinner margins, and that market has to some extent been on a downward trend (which is why the stock market is running to data centres, where the compute actually happens).
At the end of the day the core business is throwing off tonnes of money and is run fine. Would it be better not to throw billions at the next cool thing? Who knows. Probably. But Google does the same thing and they've actually built some cool stuff.
I was at Intel for a while, and there was one glaring problem: they have one product that spins off a huge amount of cash. This means a few things. First, that one product is really where the things that matter happen. But second, they have all this money and they don't know what to do with it. They can't spend it all on their core product, because that looks terrible - they're already throwing off money, and investing more probably just makes the company look bad (you're spending more to get the same revenue). So instead you have to take that money and make bets. But not just any bets. You need a bet that (a) matters if it pays off, and (b) looks favourable compared to the core business. So you buy McAfee and Altera and Mobileye; 5G was the future once...
So to take the Meta example: they need something that is going to have revenue upside similar to Meta's advertising revenue (one of the most profitable things in the universe), and that has better margins than the advertising business (basically impossible). So the only logical thing to do is to make grotesquely large bets on things that are extremely speculative. You can't bet on things that are well known, because nothing known has the properties from earlier that you're looking for, and you can't bet small, because you've got to convince people the payoff is of a similar size to your existing business.
In Intel's case they lost focus on the core business, so that died, and their other bets didn't matter because the core business was dead. With Meta, the core business isn't dead, but it's only a matter of time before it's seriously threatened, so they're going to attack that threat with everything they've got - and they have a tonne of resources.
But Google actually knows how to do research and how to apply it to products. Meta's AI research hasn't produced anywhere near as many state-of-the-art products or revolutionary achievements.
Since we're comparing to Meta, you just have to look at the state of their publicly facing products that feature AI. Google has better AI models (Gemini, Nano Banana) and they've integrated them successfully into way more products than Meta has.
Meta spends a lot of money on AI research with little to show for it. As imperfect as Google may be, they're still doing much better.
Google knows how to do research - and at the very least lets other people figure out the products, and then becomes the #3 or #4 player.
Both GCP and Gemini are products of this. Modern cloud was arguably built by Google (think Chubby, GFS, Bigtable as building blocks) - they just spent 10 years ceding it to Amazon before competing.
I was thinking more of their primary revenue source / money printer being their ads business, like Meta's; they then also spend billions from it on all kinds of other bets.
In 2026 we need to update our mental model of Google. Google has been wildly successful at adding diversification. Around 40% of Google’s profit (depending on the quarter) comes from non-search income.
They've built a wildly successful cloud platform, they're expanding their subscription services, they've got enterprise offerings, etc.
The trick is that Google accepted that none of their other businesses would likely have the margins and volume that search has, but they did it anyway.
They already attacked it with everything they've got lmao
As in, in 2012. They outright replaced people's email addresses in their profiles (which makes it harder to reach people outside the walled garden, and harder to transfer your credentials to a competing service), and I've heard Google+ links got blocked.
Zuckerberg is many things. Not everything he's accused of (Trump/Cambridge Analytica) may be entirely accurate, but he is at least partly a bit of a scumbag.
I was going to say I disagree, that I think that at least some level of discussion on HN about important things going on is important. Israel is actually a tech powerhouse and a lot of this is seriously shaping the defence technology policy and is telling a lot about how power dynamics actually can play out.
Having said that, my settings show me all comments that are flagged. HN is apparently not capable of having a respectful conversation about this. Almost anything expressed on the actual topic has been flagged. The only thing left are comments rules lawyering to say we shouldn't discuss the topic at all.
It's kind of an indictment of the users of HN. It might be the right move to remove the article, but it becomes the right move because the users of this site can't be trusted to actually conduct a conversation about it.
I feel something very similar. I have strong views that what Israel is doing is wrong. But I look around at our politics (in the UK), and there is such a well-oiled Israeli PR operation, very happy to make career-ending accusations, that talking publicly about this is actually quite dangerous (not helped by the loonies who are, and have always been, disgusting anti-semites). And you look at our politicians' stance on it, and the career of people like Lord Walney, and it's clear we're in a very dangerous place. I think there is a very wide gap between what the average British person actually believes about Israel and what is happening to the Palestinians, and the acceptable positions you can express in Westminster. I also fear that once the dam breaks and that's no longer the case, the swing back against Israel is going to be quick and harsh. That's difficult, because I have friends and family in Israel - I would like to see Israel be a free and open liberal democracy that shares what used to be western values, but maybe we're too late for that.
I think a lot of the pushback comes down to your attitude. The way you're talking about AI is like how the crypto bros talked about bitcoin. Just being very insistent on your point of view is a red flag. Either you can present new data to convince people, or your insistence will just look like it's emotional rather than rational.
I use AI every day as part of my work, and it's very unclear to me where it's going; we have no idea if we're on an exponential or an S-curve. Now, normally people talk with conviction because they have more data. But one of the breakthroughs of crypto was this social convention of just having very strong opinions based on nothing. A lot of that culture has come over to AI.
Your comment typifies this: it's all about how I need to get on board, AI has already won, and you've got an advantage over me because you realise this.
Go back and look at the actual article you're commenting on. Did the AI analysis of job exposure provide anything of value? I'm not totally convinced it did, and you didn't even think about it. What critical thinking did you do about the data that came out of this dashboard?
Well, what do you mean by "works"? The guys on Twitter screaming "Have fun being poor" were by and large just trying to scam you. Like, I could phone up your grandmother and convince her she's got a virus and needs to transfer her life savings to me before the hackers get it; that could make me rich, but does that "work"? I don't know what crypto actually worked for beyond creating a target-rich environment for scammers, a neat way to buy drugs, and a good way for criminals and rogue nations to launder money.
This seems odd. I think broadly there are two ways of structuring how you interact with AI agents:
* The first is where there's you and your computer, and you're doing pre-AI work. You hit some hotkey and pass off some task to AI.
* The second - and where I think we should probably be going - is there's you, and you interact with an agent. You aren't handing the Q4 report off to the agent; the agent is bringing the Q4 report to you.
I think the first scenario is trying to pry agentic work into legacy workflows. It will be more powerful when we simply go straight to the second, where orchestration and interaction with your agents is the interface.
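A minimal sketch of the contrast, in Python. Everything here is hypothetical (the `Task` type, the function names, the idea of an `agent` being a plain callable); it's just meant to make the two interaction shapes concrete:

```python
from dataclasses import dataclass
from typing import Callable, Iterator, List, Optional


@dataclass
class Task:
    description: str
    result: Optional[str] = None


# Pattern 1: human-driven. You do your normal work, and per task you
# decide whether to hand it off to the agent (the "hotkey" model).
def handoff_workflow(agent: Callable[[str], str], tasks: List[Task]) -> List[Task]:
    for task in tasks:
        if "report" in task.description:   # the human chooses to delegate
            task.result = agent(task.description)
        else:
            task.result = "done by human"
    return tasks


# Pattern 2: agent-driven. The agent works the whole queue and only
# surfaces a task to the human when it needs a decision - the agent
# brings the Q4 report to you, not the other way round.
def agent_first_workflow(
    agent: Callable[[str], str],
    tasks: List[Task],
    needs_human: Callable[[Task], bool],
) -> Iterator[Task]:
    for task in tasks:
        if needs_human(task):
            yield task                      # escalated to the human
        else:
            task.result = agent(task.description)
```

The design difference is where the loop lives: in the first pattern the human owns the loop and the agent is a subroutine; in the second the agent owns the loop and the human is the exception handler.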
This article is answering a different question from the one it's asking. What it actually answers is "What is the most effective strategy for freezing your eggs if you're absolutely certain you will need to?"
The reason women freeze their eggs in their early 30s is because they still have a good chance for it to be effective and they now have a strong idea they'll need to. You don't have that second piece of information at age 19.
Or to be specific: what is the size of the cohort of women you expect to freeze their eggs at the age of 19 who will actually use those frozen eggs? How many of them will give birth to children without the help of IVF, and how many will choose never to have children?
I think this article is a good example of rationalism, which is basically getting very mathsy about one specific view of the data, without viewing the data in the context of the decision that is being made.
For example, what is the percentage of women you expect to freeze their eggs at age 19, who you then expect to be unable to afford the $500 every year to keep those eggs frozen over the next decade?